00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1060 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3727 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.148 Using shallow fetch with depth 1 00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.148 > git --version # timeout=10 00:00:00.172 > git --version # 'git version 2.39.2' 00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.225 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.236 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.236 > git config core.sparsecheckout # timeout=10 00:00:06.247 > git read-tree -mu HEAD # timeout=10 00:00:06.264 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.291 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.291 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.401 [Pipeline] Start of Pipeline 00:00:06.412 [Pipeline] library 00:00:06.413 Loading library shm_lib@master 00:00:06.413 Library shm_lib@master is cached. Copying from home. 00:00:06.426 [Pipeline] node 00:00:06.434 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.435 [Pipeline] { 00:00:06.445 [Pipeline] catchError 00:00:06.446 [Pipeline] { 00:00:06.454 [Pipeline] wrap 00:00:06.461 [Pipeline] { 00:00:06.467 [Pipeline] stage 00:00:06.468 [Pipeline] { (Prologue) 00:00:06.482 [Pipeline] echo 00:00:06.483 Node: VM-host-SM0 00:00:06.488 [Pipeline] cleanWs 00:00:06.497 [WS-CLEANUP] Deleting project workspace... 00:00:06.497 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.502 [WS-CLEANUP] done 00:00:06.683 [Pipeline] setCustomBuildProperty 00:00:06.748 [Pipeline] httpRequest 00:00:07.260 [Pipeline] echo 00:00:07.262 Sorcerer 10.211.164.20 is alive 00:00:07.269 [Pipeline] retry 00:00:07.271 [Pipeline] { 00:00:07.284 [Pipeline] httpRequest 00:00:07.288 HttpMethod: GET 00:00:07.289 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.289 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.307 Response Code: HTTP/1.1 200 OK 00:00:07.308 Success: Status code 200 is in the accepted range: 200,404 00:00:07.308 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.861 [Pipeline] } 00:00:33.880 [Pipeline] // retry 00:00:33.888 [Pipeline] sh 00:00:34.170 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:34.186 [Pipeline] httpRequest 00:00:34.573 [Pipeline] echo 00:00:34.575 Sorcerer 10.211.164.20 is alive 00:00:34.585 [Pipeline] retry 00:00:34.587 [Pipeline] { 00:00:34.602 [Pipeline] httpRequest 00:00:34.606 HttpMethod: GET 00:00:34.607 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:34.608 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:34.621 Response Code: HTTP/1.1 200 OK 00:00:34.622 Success: Status code 200 is in the accepted range: 200,404 00:00:34.623 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:25.191 [Pipeline] } 00:01:25.209 [Pipeline] // retry 00:01:25.217 [Pipeline] sh 00:01:25.500 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:28.041 [Pipeline] sh 00:01:28.319 + git -C spdk log --oneline -n5 00:01:28.319 c13c99a5e test: Various fixes for Fedora40 00:01:28.319 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:28.319 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:28.319 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:28.319 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:28.336 [Pipeline] withCredentials 00:01:28.345 > git --version # timeout=10 00:01:28.356 > git --version # 'git version 2.39.2' 00:01:28.369 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:28.371 [Pipeline] { 00:01:28.379 [Pipeline] retry 00:01:28.381 [Pipeline] { 00:01:28.394 [Pipeline] sh 00:01:28.672 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:28.682 [Pipeline] } 00:01:28.698 [Pipeline] // retry 00:01:28.703 [Pipeline] } 00:01:28.718 [Pipeline] // withCredentials 00:01:28.726 [Pipeline] httpRequest 00:01:29.103 [Pipeline] echo 00:01:29.105 Sorcerer 10.211.164.20 is alive 00:01:29.114 [Pipeline] retry 00:01:29.116 [Pipeline] { 00:01:29.129 [Pipeline] httpRequest 00:01:29.133 HttpMethod: GET 00:01:29.134 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.134 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.136 Response Code: HTTP/1.1 200 OK 00:01:29.137 Success: Status code 200 is in the accepted range: 200,404 00:01:29.137 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:33.751 [Pipeline] } 00:01:33.769 [Pipeline] // retry 
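The httpRequest/retry steps above pull pre-packaged source snapshots (jbp, spdk, and here dpdk) from the internal package cache at 10.211.164.20, save them into the workspace, and then unpack them. A minimal shell sketch of that fetch-and-extract pattern, using curl only as a stand-in for the pipeline's httpRequest step (the cache host and archive names are taken from this log; the retry loop mirrors the retry{} blocks above):

  # Sketch only: the pipeline uses Jenkins httpRequest/retry steps, not curl.
  cache=http://10.211.164.20/packages
  ws=/var/jenkins/workspace/nvmf-tcp-vg-autotest
  for pkg in jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz \
             spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz \
             dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz; do
    curl --fail --retry 3 -o "$ws/$pkg" "$cache/$pkg"   # retried download, as in the retry{} step
    tar --no-same-owner -xf "$ws/$pkg" -C "$ws"         # --no-same-owner, as in the log's tar steps
  done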
00:01:33.776 [Pipeline] sh 00:01:34.113 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:35.502 [Pipeline] sh 00:01:35.783 + git -C dpdk log --oneline -n5 00:01:35.783 eeb0605f11 version: 23.11.0 00:01:35.784 238778122a doc: update release notes for 23.11 00:01:35.784 46aa6b3cfc doc: fix description of RSS features 00:01:35.784 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:35.784 7e421ae345 devtools: support skipping forbid rule check 00:01:35.800 [Pipeline] writeFile 00:01:35.815 [Pipeline] sh 00:01:36.094 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:36.105 [Pipeline] sh 00:01:36.385 + cat autorun-spdk.conf 00:01:36.385 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.385 SPDK_TEST_NVMF=1 00:01:36.385 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.385 SPDK_TEST_USDT=1 00:01:36.385 SPDK_RUN_UBSAN=1 00:01:36.385 SPDK_TEST_NVMF_MDNS=1 00:01:36.385 NET_TYPE=virt 00:01:36.385 SPDK_JSONRPC_GO_CLIENT=1 00:01:36.385 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:36.385 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:36.385 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.392 RUN_NIGHTLY=1 00:01:36.394 [Pipeline] } 00:01:36.407 [Pipeline] // stage 00:01:36.420 [Pipeline] stage 00:01:36.422 [Pipeline] { (Run VM) 00:01:36.434 [Pipeline] sh 00:01:36.714 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:36.714 + echo 'Start stage prepare_nvme.sh' 00:01:36.714 Start stage prepare_nvme.sh 00:01:36.714 + [[ -n 6 ]] 00:01:36.714 + disk_prefix=ex6 00:01:36.714 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:36.714 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:36.714 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:36.714 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.714 ++ SPDK_TEST_NVMF=1 00:01:36.714 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.714 ++ SPDK_TEST_USDT=1 00:01:36.714 ++ SPDK_RUN_UBSAN=1 00:01:36.714 ++ SPDK_TEST_NVMF_MDNS=1 00:01:36.714 ++ NET_TYPE=virt 00:01:36.714 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:36.714 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:36.714 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:36.714 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.714 ++ RUN_NIGHTLY=1 00:01:36.714 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:36.714 + nvme_files=() 00:01:36.714 + declare -A nvme_files 00:01:36.714 + backend_dir=/var/lib/libvirt/images/backends 00:01:36.714 + nvme_files['nvme.img']=5G 00:01:36.714 + nvme_files['nvme-cmb.img']=5G 00:01:36.714 + nvme_files['nvme-multi0.img']=4G 00:01:36.714 + nvme_files['nvme-multi1.img']=4G 00:01:36.714 + nvme_files['nvme-multi2.img']=4G 00:01:36.714 + nvme_files['nvme-openstack.img']=8G 00:01:36.714 + nvme_files['nvme-zns.img']=5G 00:01:36.714 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:36.714 + (( SPDK_TEST_FTL == 1 )) 00:01:36.714 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:36.715 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:36.715 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.715 + for nvme in "${!nvme_files[@]}" 00:01:36.715 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:36.974 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.974 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:36.974 + echo 'End stage prepare_nvme.sh' 00:01:36.974 End stage prepare_nvme.sh 00:01:36.986 [Pipeline] sh 00:01:37.268 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:37.268 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:37.268 00:01:37.268 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:37.268 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:37.268 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:37.268 HELP=0 00:01:37.268 DRY_RUN=0 00:01:37.268 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:37.268 NVME_DISKS_TYPE=nvme,nvme, 00:01:37.268 NVME_AUTO_CREATE=0 00:01:37.268 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:37.268 NVME_CMB=,, 00:01:37.268 NVME_PMR=,, 00:01:37.268 NVME_ZNS=,, 00:01:37.268 NVME_MS=,, 00:01:37.268 NVME_FDP=,, 00:01:37.268 
SPDK_VAGRANT_DISTRO=fedora39 00:01:37.268 SPDK_VAGRANT_VMCPU=10 00:01:37.268 SPDK_VAGRANT_VMRAM=12288 00:01:37.268 SPDK_VAGRANT_PROVIDER=libvirt 00:01:37.268 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:37.268 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:37.268 SPDK_OPENSTACK_NETWORK=0 00:01:37.268 VAGRANT_PACKAGE_BOX=0 00:01:37.268 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:37.268 FORCE_DISTRO=true 00:01:37.268 VAGRANT_BOX_VERSION= 00:01:37.268 EXTRA_VAGRANTFILES= 00:01:37.268 NIC_MODEL=e1000 00:01:37.268 00:01:37.268 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:37.268 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:39.803 Bringing machine 'default' up with 'libvirt' provider... 00:01:40.370 ==> default: Creating image (snapshot of base box volume). 00:01:40.370 ==> default: Creating domain with the following settings... 00:01:40.370 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734268545_f4a488e194a6bc9d435c 00:01:40.370 ==> default: -- Domain type: kvm 00:01:40.370 ==> default: -- Cpus: 10 00:01:40.370 ==> default: -- Feature: acpi 00:01:40.370 ==> default: -- Feature: apic 00:01:40.370 ==> default: -- Feature: pae 00:01:40.370 ==> default: -- Memory: 12288M 00:01:40.370 ==> default: -- Memory Backing: hugepages: 00:01:40.370 ==> default: -- Management MAC: 00:01:40.370 ==> default: -- Loader: 00:01:40.370 ==> default: -- Nvram: 00:01:40.370 ==> default: -- Base box: spdk/fedora39 00:01:40.370 ==> default: -- Storage pool: default 00:01:40.370 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734268545_f4a488e194a6bc9d435c.img (20G) 00:01:40.370 ==> default: -- Volume Cache: default 00:01:40.370 ==> default: -- Kernel: 00:01:40.370 ==> default: -- Initrd: 00:01:40.370 ==> default: -- Graphics Type: vnc 00:01:40.370 ==> default: -- Graphics Port: -1 00:01:40.370 ==> default: -- Graphics IP: 127.0.0.1 00:01:40.370 ==> default: -- Graphics Password: Not defined 00:01:40.370 ==> default: -- Video Type: cirrus 00:01:40.370 ==> default: -- Video VRAM: 9216 00:01:40.370 ==> default: -- Sound Type: 00:01:40.370 ==> default: -- Keymap: en-us 00:01:40.370 ==> default: -- TPM Path: 00:01:40.370 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:40.370 ==> default: -- Command line args: 00:01:40.370 ==> default: -> value=-device, 00:01:40.370 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:40.370 ==> default: -> value=-drive, 00:01:40.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:40.371 ==> default: -> value=-device, 00:01:40.371 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.371 ==> default: -> value=-device, 00:01:40.371 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:40.371 ==> default: -> value=-drive, 00:01:40.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:40.371 ==> default: -> value=-device, 00:01:40.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.371 ==> default: -> value=-drive, 00:01:40.371 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:40.371 ==> default: -> value=-device, 00:01:40.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.371 ==> default: -> value=-drive, 00:01:40.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:40.371 ==> default: -> value=-device, 00:01:40.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.630 ==> default: Creating shared folders metadata... 00:01:40.630 ==> default: Starting domain. 00:01:42.532 ==> default: Waiting for domain to get an IP address... 00:02:00.643 ==> default: Waiting for SSH to become available... 00:02:00.643 ==> default: Configuring and enabling network interfaces... 00:02:04.832 default: SSH address: 192.168.121.29:22 00:02:04.832 default: SSH username: vagrant 00:02:04.832 default: SSH auth method: private key 00:02:06.734 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:13.300 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:19.866 ==> default: Mounting SSHFS shared folder... 00:02:21.243 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:21.243 ==> default: Checking Mount.. 00:02:22.179 ==> default: Folder Successfully Mounted! 00:02:22.179 ==> default: Running provisioner: file... 00:02:23.115 default: ~/.gitconfig => .gitconfig 00:02:23.684 00:02:23.684 SUCCESS! 00:02:23.684 00:02:23.684 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:23.684 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:23.684 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:23.684 00:02:23.692 [Pipeline] } 00:02:23.708 [Pipeline] // stage 00:02:23.717 [Pipeline] dir 00:02:23.718 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:23.720 [Pipeline] { 00:02:23.732 [Pipeline] catchError 00:02:23.734 [Pipeline] { 00:02:23.747 [Pipeline] sh 00:02:24.023 + vagrant ssh-config --host vagrant 00:02:24.023 + sed -ne /^Host/,$p 00:02:24.023 + tee ssh_conf 00:02:27.311 Host vagrant 00:02:27.311 HostName 192.168.121.29 00:02:27.311 User vagrant 00:02:27.311 Port 22 00:02:27.311 UserKnownHostsFile /dev/null 00:02:27.311 StrictHostKeyChecking no 00:02:27.311 PasswordAuthentication no 00:02:27.311 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:27.311 IdentitiesOnly yes 00:02:27.311 LogLevel FATAL 00:02:27.311 ForwardAgent yes 00:02:27.311 ForwardX11 yes 00:02:27.311 00:02:27.323 [Pipeline] withEnv 00:02:27.325 [Pipeline] { 00:02:27.338 [Pipeline] sh 00:02:27.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:27.618 source /etc/os-release 00:02:27.618 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.618 # Minimal, systemd-like check. 
00:02:27.618 if [[ -e /.dockerenv ]]; then 00:02:27.618 # Clear garbage from the node's name: 00:02:27.618 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.618 # $HOSTNAME is the actual container id 00:02:27.618 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.618 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:27.618 # We can assume this is a mount from a host where container is running, 00:02:27.618 # so fetch its hostname to easily identify the target swarm worker. 00:02:27.618 container="$(< /etc/hostname) ($agent)" 00:02:27.618 else 00:02:27.618 # Fallback 00:02:27.618 container=$agent 00:02:27.618 fi 00:02:27.618 fi 00:02:27.618 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.618 00:02:27.888 [Pipeline] } 00:02:27.905 [Pipeline] // withEnv 00:02:27.913 [Pipeline] setCustomBuildProperty 00:02:27.927 [Pipeline] stage 00:02:27.929 [Pipeline] { (Tests) 00:02:27.946 [Pipeline] sh 00:02:28.225 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:28.497 [Pipeline] sh 00:02:28.777 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:29.050 [Pipeline] timeout 00:02:29.050 Timeout set to expire in 1 hr 0 min 00:02:29.052 [Pipeline] { 00:02:29.067 [Pipeline] sh 00:02:29.347 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:29.922 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:29.933 [Pipeline] sh 00:02:30.213 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:30.485 [Pipeline] sh 00:02:30.777 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:30.805 [Pipeline] sh 00:02:31.085 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:31.345 ++ readlink -f spdk_repo 00:02:31.345 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.345 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.345 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.345 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.345 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.345 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.345 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.345 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:31.345 + cd /home/vagrant/spdk_repo 00:02:31.345 + source /etc/os-release 00:02:31.345 ++ NAME='Fedora Linux' 00:02:31.345 ++ VERSION='39 (Cloud Edition)' 00:02:31.345 ++ ID=fedora 00:02:31.345 ++ VERSION_ID=39 00:02:31.345 ++ VERSION_CODENAME= 00:02:31.345 ++ PLATFORM_ID=platform:f39 00:02:31.345 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:31.345 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.345 ++ LOGO=fedora-logo-icon 00:02:31.345 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:31.345 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.345 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:31.345 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.345 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.345 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.345 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:31.345 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.345 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:31.345 ++ SUPPORT_END=2024-11-12 00:02:31.345 ++ VARIANT='Cloud Edition' 00:02:31.345 ++ VARIANT_ID=cloud 00:02:31.345 + uname -a 00:02:31.345 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:31.345 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:31.345 Hugepages 00:02:31.345 node hugesize free / total 00:02:31.345 node0 1048576kB 0 / 0 00:02:31.345 node0 2048kB 0 / 0 00:02:31.345 00:02:31.345 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.345 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:31.345 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:31.345 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:31.345 + rm -f /tmp/spdk-ld-path 00:02:31.345 + source autorun-spdk.conf 00:02:31.345 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.345 ++ SPDK_TEST_NVMF=1 00:02:31.345 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.345 ++ SPDK_TEST_USDT=1 00:02:31.345 ++ SPDK_RUN_UBSAN=1 00:02:31.345 ++ SPDK_TEST_NVMF_MDNS=1 00:02:31.345 ++ NET_TYPE=virt 00:02:31.345 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:31.345 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:31.345 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.345 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.345 ++ RUN_NIGHTLY=1 00:02:31.345 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:31.345 + [[ -n '' ]] 00:02:31.345 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:31.604 + for M in /var/spdk/build-*-manifest.txt 00:02:31.604 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:31.604 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.604 + for M in /var/spdk/build-*-manifest.txt 00:02:31.604 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:31.604 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.604 + for M in /var/spdk/build-*-manifest.txt 00:02:31.604 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:31.604 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.604 ++ uname 00:02:31.604 + [[ Linux == \L\i\n\u\x ]] 00:02:31.604 + sudo dmesg -T 00:02:31.604 + sudo dmesg --clear 00:02:31.604 + dmesg_pid=5967 00:02:31.604 + sudo dmesg -Tw 00:02:31.604 + [[ Fedora Linux == FreeBSD ]] 00:02:31.605 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:31.605 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.605 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.605 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.605 + export FIO_BIN=/usr/src/fio-static/fio 00:02:31.605 + FIO_BIN=/usr/src/fio-static/fio 00:02:31.605 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.605 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:31.605 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.605 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.605 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.605 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.605 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.605 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.605 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:31.605 Test configuration: 00:02:31.605 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.605 SPDK_TEST_NVMF=1 00:02:31.605 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.605 SPDK_TEST_USDT=1 00:02:31.605 SPDK_RUN_UBSAN=1 00:02:31.605 SPDK_TEST_NVMF_MDNS=1 00:02:31.605 NET_TYPE=virt 00:02:31.605 SPDK_JSONRPC_GO_CLIENT=1 00:02:31.605 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:31.605 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.605 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.605 RUN_NIGHTLY=1 13:16:37 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:31.605 13:16:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:31.605 13:16:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:31.605 13:16:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.605 13:16:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.605 13:16:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.605 13:16:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.605 13:16:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.605 13:16:37 -- paths/export.sh@5 -- $ export PATH 00:02:31.605 13:16:37 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.605 13:16:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:31.605 13:16:37 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:31.605 13:16:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734268597.XXXXXX 00:02:31.605 13:16:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734268597.VkXB3f 00:02:31.605 13:16:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:31.605 13:16:37 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:31.605 13:16:37 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.605 13:16:37 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:31.605 13:16:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:31.605 13:16:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:31.605 13:16:37 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:31.605 13:16:37 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:31.605 13:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.605 13:16:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:31.605 13:16:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:31.605 13:16:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:31.605 13:16:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:31.605 13:16:37 -- spdk/autobuild.sh@16 -- $ date -u 00:02:31.605 Sun Dec 15 01:16:37 PM UTC 2024 00:02:31.605 13:16:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:31.605 LTS-67-gc13c99a5e 00:02:31.605 13:16:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:31.605 13:16:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:31.605 13:16:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:31.605 13:16:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:31.605 13:16:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:31.605 13:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.605 ************************************ 00:02:31.605 START TEST ubsan 00:02:31.605 ************************************ 00:02:31.605 using ubsan 00:02:31.605 13:16:37 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:31.605 00:02:31.605 real 0m0.000s 00:02:31.605 user 0m0.000s 00:02:31.605 sys 0m0.000s 00:02:31.605 13:16:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:31.605 ************************************ 00:02:31.605 13:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.605 END TEST ubsan 00:02:31.605 ************************************ 00:02:31.864 
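The START TEST / END TEST banners and the real/user/sys timings above are printed by the run_test wrapper from common/autotest_common.sh. A rough bash approximation of what such a wrapper does (the real helper also handles xtrace toggling and result bookkeeping, so this is for orientation only, not SPDK's actual code):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # run the test command; time prints real/user/sys as seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  # Usage, as seen in this log:
  run_test ubsan echo 'using ubsan'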
13:16:37 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:31.864 13:16:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:31.864 13:16:37 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:31.864 13:16:37 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:31.864 13:16:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:31.864 13:16:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.864 ************************************ 00:02:31.864 START TEST build_native_dpdk 00:02:31.864 ************************************ 00:02:31.864 13:16:37 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:31.864 13:16:37 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:31.864 13:16:37 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:31.864 13:16:37 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:31.864 13:16:37 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:31.864 13:16:37 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:31.864 13:16:37 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:31.864 13:16:37 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:31.864 13:16:37 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:31.864 13:16:37 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:31.864 13:16:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:31.864 13:16:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:31.864 13:16:37 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:31.864 13:16:37 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:31.864 13:16:37 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:31.864 13:16:37 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:31.864 13:16:37 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.864 13:16:37 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:31.864 13:16:37 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:31.864 13:16:37 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:31.864 13:16:37 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:31.864 eeb0605f11 version: 23.11.0 00:02:31.864 238778122a doc: update release notes for 23.11 00:02:31.864 46aa6b3cfc doc: fix description of RSS features 00:02:31.865 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:31.865 7e421ae345 devtools: support skipping forbid rule check 00:02:31.865 13:16:37 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:31.865 13:16:37 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:31.865 13:16:37 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:31.865 13:16:37 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:31.865 13:16:37 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:31.865 13:16:37 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:31.865 13:16:37 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:31.865 13:16:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:31.865 13:16:37 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:31.865 13:16:37 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:31.865 13:16:37 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:31.865 13:16:37 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:31.865 13:16:37 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:31.865 13:16:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:31.865 13:16:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:31.865 13:16:37 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:31.865 13:16:37 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:31.865 13:16:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.865 13:16:37 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:31.865 13:16:37 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:31.865 13:16:37 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:31.865 13:16:37 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:31.865 13:16:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:31.865 13:16:37 -- scripts/common.sh@343 -- $ case "$op" in 00:02:31.865 13:16:37 -- scripts/common.sh@344 -- $ : 1 00:02:31.865 13:16:37 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:31.865 13:16:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.865 13:16:37 -- scripts/common.sh@364 -- $ decimal 23 00:02:31.865 13:16:37 -- scripts/common.sh@352 -- $ local d=23 00:02:31.865 13:16:37 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.865 13:16:37 -- scripts/common.sh@354 -- $ echo 23 00:02:31.865 13:16:37 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:31.865 13:16:37 -- scripts/common.sh@365 -- $ decimal 21 00:02:31.865 13:16:37 -- scripts/common.sh@352 -- $ local d=21 00:02:31.865 13:16:37 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:31.865 13:16:37 -- scripts/common.sh@354 -- $ echo 21 00:02:31.865 13:16:37 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:31.865 13:16:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:31.865 13:16:37 -- scripts/common.sh@366 -- $ return 1 00:02:31.865 13:16:37 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:31.865 patching file config/rte_config.h 00:02:31.865 Hunk #1 succeeded at 60 (offset 1 line). 00:02:31.865 13:16:37 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:31.865 13:16:37 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:31.865 13:16:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:31.865 13:16:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:31.865 13:16:37 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:31.865 13:16:37 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:31.865 13:16:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.865 13:16:37 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:31.865 13:16:37 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:31.865 13:16:37 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:31.865 13:16:37 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:31.865 13:16:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:31.865 13:16:37 -- scripts/common.sh@343 -- $ case "$op" in 00:02:31.865 13:16:37 -- scripts/common.sh@344 -- $ : 1 00:02:31.865 13:16:37 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:31.865 13:16:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.865 13:16:37 -- scripts/common.sh@364 -- $ decimal 23 00:02:31.865 13:16:37 -- scripts/common.sh@352 -- $ local d=23 00:02:31.865 13:16:37 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.865 13:16:37 -- scripts/common.sh@354 -- $ echo 23 00:02:31.865 13:16:37 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:31.865 13:16:37 -- scripts/common.sh@365 -- $ decimal 24 00:02:31.865 13:16:37 -- scripts/common.sh@352 -- $ local d=24 00:02:31.865 13:16:37 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:31.865 13:16:37 -- scripts/common.sh@354 -- $ echo 24 00:02:31.865 13:16:37 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:31.865 13:16:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:31.865 13:16:37 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:31.865 13:16:37 -- scripts/common.sh@367 -- $ return 0 00:02:31.865 13:16:37 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:31.865 patching file lib/pcapng/rte_pcapng.c 00:02:31.865 13:16:37 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:31.865 13:16:37 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:31.865 13:16:37 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:31.865 13:16:37 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:31.865 13:16:37 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.137 The Meson build system 00:02:37.137 Version: 1.5.0 00:02:37.137 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.137 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.137 Build type: native build 00:02:37.137 Program cat found: YES (/usr/bin/cat) 00:02:37.137 Project name: DPDK 00:02:37.137 Project version: 23.11.0 00:02:37.137 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.137 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:37.137 Host machine cpu family: x86_64 00:02:37.137 Host machine cpu: x86_64 00:02:37.137 Message: ## Building in Developer Mode ## 00:02:37.137 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.137 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.137 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.137 Program python3 found: YES (/usr/bin/python3) 00:02:37.137 Program cat found: YES (/usr/bin/cat) 00:02:37.137 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:37.137 Compiler for C supports arguments -march=native: YES 00:02:37.137 Checking for size of "void *" : 8 00:02:37.137 Checking for size of "void *" : 8 (cached) 00:02:37.137 Library m found: YES 00:02:37.137 Library numa found: YES 00:02:37.137 Has header "numaif.h" : YES 00:02:37.137 Library fdt found: NO 00:02:37.137 Library execinfo found: NO 00:02:37.137 Has header "execinfo.h" : YES 00:02:37.137 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.137 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.137 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.137 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.137 Run-time dependency openssl found: YES 3.1.1 00:02:37.137 Run-time dependency libpcap found: YES 1.10.4 00:02:37.137 Has header "pcap.h" with dependency libpcap: YES 00:02:37.137 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.137 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.137 Compiler for C supports arguments -Wformat: YES 00:02:37.137 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.137 Compiler for C supports arguments -Wformat-security: NO 00:02:37.137 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.137 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.137 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.137 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.137 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.137 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.138 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.138 Compiler for C supports arguments -Wundef: YES 00:02:37.138 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.138 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.138 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.138 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.138 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.138 Program objdump found: YES (/usr/bin/objdump) 00:02:37.138 Compiler for C supports arguments -mavx512f: YES 00:02:37.138 Checking if "AVX512 checking" compiles: YES 00:02:37.138 Fetching value of define "__SSE4_2__" : 1 00:02:37.138 Fetching value of define "__AES__" : 1 00:02:37.138 Fetching value of define "__AVX__" : 1 00:02:37.138 Fetching value of define "__AVX2__" : 1 00:02:37.138 Fetching value of define "__AVX512BW__" : (undefined) 00:02:37.138 Fetching value of define "__AVX512CD__" : (undefined) 00:02:37.138 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:37.138 Fetching value of define "__AVX512F__" : (undefined) 00:02:37.138 Fetching value of define "__AVX512VL__" : (undefined) 00:02:37.138 Fetching value of define "__PCLMUL__" : 1 00:02:37.138 Fetching value of define "__RDRND__" : 1 00:02:37.138 Fetching value of define "__RDSEED__" : 1 00:02:37.138 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.138 Fetching value of define "__znver1__" : (undefined) 00:02:37.138 Fetching value of define "__znver2__" : (undefined) 00:02:37.138 Fetching value of define "__znver3__" : (undefined) 00:02:37.138 Fetching value of define "__znver4__" : (undefined) 00:02:37.138 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.138 Message: lib/log: Defining dependency "log" 00:02:37.138 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.138 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.138 Checking for function "getentropy" : NO 00:02:37.138 Message: lib/eal: Defining dependency "eal" 00:02:37.138 Message: lib/ring: Defining dependency "ring" 00:02:37.138 Message: lib/rcu: Defining dependency "rcu" 00:02:37.138 Message: lib/mempool: Defining dependency "mempool" 00:02:37.138 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.138 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.138 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.138 Compiler for C supports arguments -mpclmul: YES 00:02:37.138 Compiler for C supports arguments -maes: YES 00:02:37.138 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.138 Compiler for C supports arguments -mavx512bw: YES 00:02:37.138 Compiler for C supports arguments -mavx512dq: YES 00:02:37.138 Compiler for C supports arguments -mavx512vl: YES 00:02:37.138 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.138 Compiler for C supports arguments -mavx2: YES 00:02:37.138 Compiler for C supports arguments -mavx: YES 00:02:37.138 Message: lib/net: Defining dependency "net" 00:02:37.138 Message: lib/meter: Defining dependency "meter" 00:02:37.138 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.138 Message: lib/pci: Defining dependency "pci" 00:02:37.138 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.138 Message: lib/metrics: Defining dependency "metrics" 00:02:37.138 Message: lib/hash: Defining dependency "hash" 00:02:37.138 Message: lib/timer: Defining dependency "timer" 00:02:37.138 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:37.138 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:37.138 Message: lib/acl: Defining dependency "acl" 00:02:37.138 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.138 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.138 Run-time dependency libelf found: YES 0.191 00:02:37.138 Message: lib/bpf: Defining dependency "bpf" 00:02:37.138 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.138 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.138 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.138 Message: lib/distributor: Defining dependency "distributor" 00:02:37.138 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.138 Message: lib/efd: Defining dependency "efd" 00:02:37.138 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.138 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:37.138 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.138 Message: lib/gro: Defining dependency "gro" 00:02:37.138 Message: lib/gso: Defining dependency "gso" 00:02:37.138 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.138 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.138 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.138 Message: lib/lpm: Defining dependency "lpm" 00:02:37.138 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:37.138 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:37.138 Message: lib/member: Defining dependency "member" 00:02:37.138 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.138 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.138 Message: lib/power: Defining dependency "power" 00:02:37.138 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.138 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.138 Message: lib/mldev: Defining dependency "mldev" 00:02:37.138 Message: lib/rib: Defining dependency "rib" 00:02:37.138 Message: lib/reorder: Defining dependency "reorder" 00:02:37.138 Message: lib/sched: Defining dependency "sched" 00:02:37.138 Message: lib/security: Defining dependency "security" 00:02:37.138 Message: lib/stack: Defining dependency "stack" 00:02:37.138 Has header "linux/userfaultfd.h" : YES 00:02:37.138 Has header "linux/vduse.h" : YES 00:02:37.138 Message: lib/vhost: Defining dependency "vhost" 00:02:37.138 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.138 Message: lib/pdcp: Defining dependency "pdcp" 00:02:37.138 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.138 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.138 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:37.138 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:37.138 Message: lib/fib: Defining dependency "fib" 00:02:37.138 Message: lib/port: Defining dependency "port" 00:02:37.138 Message: lib/pdump: Defining dependency "pdump" 00:02:37.138 Message: lib/table: Defining dependency "table" 00:02:37.138 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.138 Message: lib/graph: Defining dependency "graph" 00:02:37.138 Message: lib/node: Defining dependency "node" 00:02:37.138 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.043 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.043 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.043 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.043 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.043 Compiler for C supports arguments -Wno-unused-value: YES 00:02:39.043 Compiler for C supports arguments -Wno-format: YES 00:02:39.043 Compiler for C supports arguments -Wno-format-security: YES 00:02:39.043 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.043 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.043 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.043 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.043 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.043 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.043 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.043 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.043 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.043 Has header "sys/epoll.h" : YES 00:02:39.043 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:39.043 Configuring doxy-api-html.conf using configuration 00:02:39.043 Configuring doxy-api-man.conf using configuration 00:02:39.043 Program mandb found: YES (/usr/bin/mandb) 00:02:39.043 Program sphinx-build found: NO 00:02:39.043 Configuring rte_build_config.h using configuration 00:02:39.043 Message: 00:02:39.043 ================= 00:02:39.043 Applications Enabled 00:02:39.043 ================= 
00:02:39.043 00:02:39.043 apps: 00:02:39.043 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:39.043 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:39.043 test-pmd, test-regex, test-sad, test-security-perf, 00:02:39.043 00:02:39.043 Message: 00:02:39.043 ================= 00:02:39.043 Libraries Enabled 00:02:39.043 ================= 00:02:39.043 00:02:39.043 libs: 00:02:39.043 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:39.043 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:39.043 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:39.043 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:39.043 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:39.043 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:39.043 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:39.043 00:02:39.043 00:02:39.043 Message: 00:02:39.043 =============== 00:02:39.043 Drivers Enabled 00:02:39.043 =============== 00:02:39.043 00:02:39.043 common: 00:02:39.043 00:02:39.043 bus: 00:02:39.043 pci, vdev, 00:02:39.043 mempool: 00:02:39.043 ring, 00:02:39.043 dma: 00:02:39.043 00:02:39.043 net: 00:02:39.043 i40e, 00:02:39.043 raw: 00:02:39.043 00:02:39.043 crypto: 00:02:39.043 00:02:39.043 compress: 00:02:39.043 00:02:39.043 regex: 00:02:39.043 00:02:39.043 ml: 00:02:39.043 00:02:39.043 vdpa: 00:02:39.043 00:02:39.043 event: 00:02:39.043 00:02:39.043 baseband: 00:02:39.043 00:02:39.043 gpu: 00:02:39.043 00:02:39.043 00:02:39.043 Message: 00:02:39.043 ================= 00:02:39.043 Content Skipped 00:02:39.043 ================= 00:02:39.043 00:02:39.043 apps: 00:02:39.043 00:02:39.043 libs: 00:02:39.043 00:02:39.043 drivers: 00:02:39.043 common/cpt: not in enabled drivers build config 00:02:39.043 common/dpaax: not in enabled drivers build config 00:02:39.043 common/iavf: not in enabled drivers build config 00:02:39.043 common/idpf: not in enabled drivers build config 00:02:39.043 common/mvep: not in enabled drivers build config 00:02:39.043 common/octeontx: not in enabled drivers build config 00:02:39.043 bus/auxiliary: not in enabled drivers build config 00:02:39.043 bus/cdx: not in enabled drivers build config 00:02:39.043 bus/dpaa: not in enabled drivers build config 00:02:39.043 bus/fslmc: not in enabled drivers build config 00:02:39.043 bus/ifpga: not in enabled drivers build config 00:02:39.043 bus/platform: not in enabled drivers build config 00:02:39.043 bus/vmbus: not in enabled drivers build config 00:02:39.043 common/cnxk: not in enabled drivers build config 00:02:39.043 common/mlx5: not in enabled drivers build config 00:02:39.043 common/nfp: not in enabled drivers build config 00:02:39.043 common/qat: not in enabled drivers build config 00:02:39.043 common/sfc_efx: not in enabled drivers build config 00:02:39.043 mempool/bucket: not in enabled drivers build config 00:02:39.043 mempool/cnxk: not in enabled drivers build config 00:02:39.043 mempool/dpaa: not in enabled drivers build config 00:02:39.043 mempool/dpaa2: not in enabled drivers build config 00:02:39.043 mempool/octeontx: not in enabled drivers build config 00:02:39.043 mempool/stack: not in enabled drivers build config 00:02:39.043 dma/cnxk: not in enabled drivers build config 00:02:39.043 dma/dpaa: not in enabled drivers build config 00:02:39.043 dma/dpaa2: not in enabled drivers build config 00:02:39.043 
dma/hisilicon: not in enabled drivers build config 00:02:39.043 dma/idxd: not in enabled drivers build config 00:02:39.043 dma/ioat: not in enabled drivers build config 00:02:39.043 dma/skeleton: not in enabled drivers build config 00:02:39.043 net/af_packet: not in enabled drivers build config 00:02:39.043 net/af_xdp: not in enabled drivers build config 00:02:39.043 net/ark: not in enabled drivers build config 00:02:39.043 net/atlantic: not in enabled drivers build config 00:02:39.043 net/avp: not in enabled drivers build config 00:02:39.043 net/axgbe: not in enabled drivers build config 00:02:39.043 net/bnx2x: not in enabled drivers build config 00:02:39.043 net/bnxt: not in enabled drivers build config 00:02:39.043 net/bonding: not in enabled drivers build config 00:02:39.043 net/cnxk: not in enabled drivers build config 00:02:39.043 net/cpfl: not in enabled drivers build config 00:02:39.043 net/cxgbe: not in enabled drivers build config 00:02:39.043 net/dpaa: not in enabled drivers build config 00:02:39.043 net/dpaa2: not in enabled drivers build config 00:02:39.043 net/e1000: not in enabled drivers build config 00:02:39.043 net/ena: not in enabled drivers build config 00:02:39.043 net/enetc: not in enabled drivers build config 00:02:39.043 net/enetfec: not in enabled drivers build config 00:02:39.043 net/enic: not in enabled drivers build config 00:02:39.043 net/failsafe: not in enabled drivers build config 00:02:39.043 net/fm10k: not in enabled drivers build config 00:02:39.043 net/gve: not in enabled drivers build config 00:02:39.043 net/hinic: not in enabled drivers build config 00:02:39.043 net/hns3: not in enabled drivers build config 00:02:39.043 net/iavf: not in enabled drivers build config 00:02:39.043 net/ice: not in enabled drivers build config 00:02:39.043 net/idpf: not in enabled drivers build config 00:02:39.043 net/igc: not in enabled drivers build config 00:02:39.043 net/ionic: not in enabled drivers build config 00:02:39.043 net/ipn3ke: not in enabled drivers build config 00:02:39.043 net/ixgbe: not in enabled drivers build config 00:02:39.043 net/mana: not in enabled drivers build config 00:02:39.043 net/memif: not in enabled drivers build config 00:02:39.043 net/mlx4: not in enabled drivers build config 00:02:39.043 net/mlx5: not in enabled drivers build config 00:02:39.043 net/mvneta: not in enabled drivers build config 00:02:39.043 net/mvpp2: not in enabled drivers build config 00:02:39.043 net/netvsc: not in enabled drivers build config 00:02:39.043 net/nfb: not in enabled drivers build config 00:02:39.043 net/nfp: not in enabled drivers build config 00:02:39.043 net/ngbe: not in enabled drivers build config 00:02:39.043 net/null: not in enabled drivers build config 00:02:39.043 net/octeontx: not in enabled drivers build config 00:02:39.043 net/octeon_ep: not in enabled drivers build config 00:02:39.043 net/pcap: not in enabled drivers build config 00:02:39.043 net/pfe: not in enabled drivers build config 00:02:39.043 net/qede: not in enabled drivers build config 00:02:39.043 net/ring: not in enabled drivers build config 00:02:39.043 net/sfc: not in enabled drivers build config 00:02:39.043 net/softnic: not in enabled drivers build config 00:02:39.043 net/tap: not in enabled drivers build config 00:02:39.043 net/thunderx: not in enabled drivers build config 00:02:39.043 net/txgbe: not in enabled drivers build config 00:02:39.043 net/vdev_netvsc: not in enabled drivers build config 00:02:39.044 net/vhost: not in enabled drivers build config 00:02:39.044 net/virtio: 
not in enabled drivers build config 00:02:39.044 net/vmxnet3: not in enabled drivers build config 00:02:39.044 raw/cnxk_bphy: not in enabled drivers build config 00:02:39.044 raw/cnxk_gpio: not in enabled drivers build config 00:02:39.044 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:39.044 raw/ifpga: not in enabled drivers build config 00:02:39.044 raw/ntb: not in enabled drivers build config 00:02:39.044 raw/skeleton: not in enabled drivers build config 00:02:39.044 crypto/armv8: not in enabled drivers build config 00:02:39.044 crypto/bcmfs: not in enabled drivers build config 00:02:39.044 crypto/caam_jr: not in enabled drivers build config 00:02:39.044 crypto/ccp: not in enabled drivers build config 00:02:39.044 crypto/cnxk: not in enabled drivers build config 00:02:39.044 crypto/dpaa_sec: not in enabled drivers build config 00:02:39.044 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.044 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.044 crypto/mlx5: not in enabled drivers build config 00:02:39.044 crypto/mvsam: not in enabled drivers build config 00:02:39.044 crypto/nitrox: not in enabled drivers build config 00:02:39.044 crypto/null: not in enabled drivers build config 00:02:39.044 crypto/octeontx: not in enabled drivers build config 00:02:39.044 crypto/openssl: not in enabled drivers build config 00:02:39.044 crypto/scheduler: not in enabled drivers build config 00:02:39.044 crypto/uadk: not in enabled drivers build config 00:02:39.044 crypto/virtio: not in enabled drivers build config 00:02:39.044 compress/isal: not in enabled drivers build config 00:02:39.044 compress/mlx5: not in enabled drivers build config 00:02:39.044 compress/octeontx: not in enabled drivers build config 00:02:39.044 compress/zlib: not in enabled drivers build config 00:02:39.044 regex/mlx5: not in enabled drivers build config 00:02:39.044 regex/cn9k: not in enabled drivers build config 00:02:39.044 ml/cnxk: not in enabled drivers build config 00:02:39.044 vdpa/ifc: not in enabled drivers build config 00:02:39.044 vdpa/mlx5: not in enabled drivers build config 00:02:39.044 vdpa/nfp: not in enabled drivers build config 00:02:39.044 vdpa/sfc: not in enabled drivers build config 00:02:39.044 event/cnxk: not in enabled drivers build config 00:02:39.044 event/dlb2: not in enabled drivers build config 00:02:39.044 event/dpaa: not in enabled drivers build config 00:02:39.044 event/dpaa2: not in enabled drivers build config 00:02:39.044 event/dsw: not in enabled drivers build config 00:02:39.044 event/opdl: not in enabled drivers build config 00:02:39.044 event/skeleton: not in enabled drivers build config 00:02:39.044 event/sw: not in enabled drivers build config 00:02:39.044 event/octeontx: not in enabled drivers build config 00:02:39.044 baseband/acc: not in enabled drivers build config 00:02:39.044 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:39.044 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:39.044 baseband/la12xx: not in enabled drivers build config 00:02:39.044 baseband/null: not in enabled drivers build config 00:02:39.044 baseband/turbo_sw: not in enabled drivers build config 00:02:39.044 gpu/cuda: not in enabled drivers build config 00:02:39.044 00:02:39.044 00:02:39.044 Build targets in project: 220 00:02:39.044 00:02:39.044 DPDK 23.11.0 00:02:39.044 00:02:39.044 User defined options 00:02:39.044 libdir : lib 00:02:39.044 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:39.044 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:39.044 c_link_args : 00:02:39.044 enable_docs : false 00:02:39.044 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.044 enable_kmods : false 00:02:39.044 machine : native 00:02:39.044 tests : false 00:02:39.044 00:02:39.044 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.044 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:39.044 13:16:44 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:39.044 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.044 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.044 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.044 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.303 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.303 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.303 [6/710] Linking static target lib/librte_kvargs.a 00:02:39.303 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.303 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.303 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.303 [10/710] Linking static target lib/librte_log.a 00:02:39.303 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.578 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.842 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.842 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.842 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.842 [16/710] Linking target lib/librte_log.so.24.0 00:02:39.842 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.842 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.101 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.101 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.101 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.101 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:40.101 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:40.101 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.360 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:40.360 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.360 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.360 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.360 [29/710] Linking static target lib/librte_telemetry.a 00:02:40.360 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.360 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.619 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.619 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.877 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.877 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.877 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.877 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.877 [38/710] Linking target lib/librte_telemetry.so.24.0 00:02:40.877 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.877 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.877 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.877 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.877 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:41.135 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.135 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:41.393 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.393 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.393 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.393 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.652 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.652 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.652 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.652 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.652 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.910 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.910 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.910 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:41.910 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:41.910 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.182 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.182 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.182 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.182 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.182 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.455 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.455 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.455 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.455 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.455 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.713 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.713 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.713 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
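[Note] The "User defined options" summary above, together with the ninja command line, can be reproduced by hand. A minimal sketch of an equivalent invocation, assuming the same DPDK 23.11 checkout under /home/vagrant/spdk_repo/dpdk and using the option values exactly as reported in the log (build-tmp directory and -j10 job count taken from the autobuild command line); the actual autobuild script may pass additional flags not shown here:

    # Configure DPDK with only the drivers listed under enable_drivers;
    # every other driver is reported as "not in enabled drivers build config".
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Build with the same parallelism shown in the log.
    ninja -C build-tmp -j10

Invoking the explicit `meson setup` subcommand as above also avoids the "Running the setup command as `meson [options]` ... is ambiguous and deprecated" warning printed earlier in the log.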
00:02:42.713 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.713 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.713 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.713 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.713 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.971 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:42.971 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.229 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.229 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.229 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.230 [83/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.488 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.488 [85/710] Linking static target lib/librte_ring.a 00:02:43.488 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.746 [87/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.746 [88/710] Linking static target lib/librte_eal.a 00:02:43.746 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.746 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.005 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.005 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.005 [93/710] Linking static target lib/librte_mempool.a 00:02:44.005 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.005 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.005 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.005 [97/710] Linking static target lib/librte_rcu.a 00:02:44.263 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:44.263 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.263 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.522 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.522 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.522 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.522 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.522 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.780 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.780 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.780 [108/710] Linking static target lib/librte_mbuf.a 00:02:44.780 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.780 [110/710] Linking static target lib/librte_net.a 00:02:45.039 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.039 [112/710] Linking static target lib/librte_meter.a 00:02:45.039 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.297 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.297 [115/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.297 [116/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.297 [117/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.297 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.297 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.862 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.862 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.121 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:46.379 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:46.379 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:46.379 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:46.379 [126/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.379 [127/710] Linking static target lib/librte_pci.a 00:02:46.379 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.637 [129/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:46.637 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.637 [131/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.637 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.637 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:46.637 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.637 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.896 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.896 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.896 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.896 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.896 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.896 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.896 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.154 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.154 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.154 [145/710] Linking static target lib/librte_cmdline.a 00:02:47.412 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.412 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:47.412 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:47.412 [149/710] Linking static target lib/librte_metrics.a 00:02:47.671 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.929 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.187 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.187 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.187 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.187 [155/710] Linking static target lib/librte_timer.a 00:02:48.445 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.703 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:48.703 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:48.962 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:48.962 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:49.528 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.528 [162/710] Linking static target lib/librte_ethdev.a 00:02:49.528 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:49.528 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:49.528 [165/710] Linking static target lib/librte_bitratestats.a 00:02:49.786 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:49.786 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.786 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:49.786 [169/710] Linking static target lib/librte_bbdev.a 00:02:49.786 [170/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.786 [171/710] Linking target lib/librte_eal.so.24.0 00:02:49.786 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.786 [173/710] Linking static target lib/librte_hash.a 00:02:50.044 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:50.044 [175/710] Linking target lib/librte_ring.so.24.0 00:02:50.044 [176/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:50.044 [177/710] Linking target lib/librte_meter.so.24.0 00:02:50.044 [178/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:50.044 [179/710] Linking target lib/librte_rcu.so.24.0 00:02:50.303 [180/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:50.303 [181/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:50.303 [182/710] Linking target lib/librte_mempool.so.24.0 00:02:50.303 [183/710] Linking target lib/librte_pci.so.24.0 00:02:50.303 [184/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:50.303 [185/710] Linking target lib/librte_timer.so.24.0 00:02:50.303 [186/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:50.303 [187/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:50.303 [188/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:50.303 [189/710] Linking static target lib/acl/libavx2_tmp.a 00:02:50.303 [190/710] Linking static target lib/acl/libavx512_tmp.a 00:02:50.303 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.303 [192/710] Linking target lib/librte_mbuf.so.24.0 00:02:50.561 [193/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.561 [194/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:50.561 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:50.561 [196/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:50.561 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:50.561 [198/710] Linking target lib/librte_net.so.24.0 00:02:50.561 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:50.561 [200/710] Linking static target lib/librte_acl.a 00:02:50.820 [201/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:50.820 [202/710] Linking target lib/librte_cmdline.so.24.0 00:02:50.820 [203/710] Linking target lib/librte_hash.so.24.0 00:02:50.820 [204/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:50.820 [205/710] Linking target lib/librte_bbdev.so.24.0 00:02:50.820 [206/710] Linking static target lib/librte_cfgfile.a 00:02:51.090 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:51.090 [208/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.090 [209/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:51.090 [210/710] Linking target lib/librte_acl.so.24.0 00:02:51.090 [211/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:51.090 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:51.363 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:51.363 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.363 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:51.363 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:51.622 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:51.622 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.880 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.880 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:51.880 [221/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.880 [222/710] Linking static target lib/librte_bpf.a 00:02:51.880 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.880 [224/710] Linking static target lib/librte_compressdev.a 00:02:52.138 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.138 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.138 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:52.396 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:52.396 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:52.396 [230/710] Linking static target lib/librte_distributor.a 00:02:52.396 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.396 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.396 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:52.655 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.655 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:52.914 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.914 [237/710] Linking static target lib/librte_dmadev.a 00:02:52.914 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.172 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.172 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:53.172 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.430 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:53.430 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:53.689 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:53.689 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:53.689 [246/710] Linking static target lib/librte_efd.a 00:02:53.947 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.947 [248/710] Linking static target lib/librte_cryptodev.a 00:02:53.947 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.947 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:53.947 [251/710] Linking target lib/librte_efd.so.24.0 00:02:54.205 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.205 [253/710] Linking target lib/librte_ethdev.so.24.0 00:02:54.205 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:54.205 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:54.205 [256/710] Linking static target lib/librte_dispatcher.a 00:02:54.463 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:54.463 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:54.463 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.463 [260/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:54.463 [261/710] Linking target lib/librte_bpf.so.24.0 00:02:54.463 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:02:54.463 [263/710] Linking static target lib/librte_gpudev.a 00:02:54.722 [264/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:54.722 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.722 [266/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.722 [267/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.980 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:54.980 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:54.980 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.980 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:55.239 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:55.239 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:55.239 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.498 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:55.498 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:55.498 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:55.498 [278/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:55.498 [279/710] Linking static target lib/librte_eventdev.a 00:02:55.498 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:55.498 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:55.756 [282/710] Linking static target lib/librte_gro.a 00:02:55.756 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:55.756 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:55.756 [285/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.756 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:55.756 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:56.014 [288/710] Linking target lib/librte_gro.so.24.0 00:02:56.015 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:56.015 [290/710] Linking static target lib/librte_gso.a 00:02:56.273 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:56.273 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.273 [293/710] Linking target lib/librte_gso.so.24.0 00:02:56.273 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:56.532 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:56.532 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:56.532 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:56.532 [298/710] Linking static target lib/librte_jobstats.a 00:02:56.532 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:56.791 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:56.791 [301/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:56.791 [302/710] Linking static target lib/librte_latencystats.a 00:02:56.791 [303/710] Linking static target lib/librte_ip_frag.a 00:02:56.791 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.791 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:57.049 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.049 [307/710] Linking target lib/librte_latencystats.so.24.0 00:02:57.049 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.049 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:57.049 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:57.049 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:57.049 [312/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:57.049 [313/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:57.049 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:57.306 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.306 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.306 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.563 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.563 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:02:57.563 [320/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:57.822 [321/710] Linking static target lib/librte_lpm.a 00:02:57.822 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:57.822 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:57.822 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:57.822 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:57.822 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:58.081 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.081 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:58.081 [329/710] Linking static target lib/librte_pcapng.a 00:02:58.081 [330/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:58.081 [331/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.081 [332/710] Linking target lib/librte_lpm.so.24.0 00:02:58.081 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.081 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:58.339 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.339 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:58.339 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:58.339 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.339 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:58.598 [340/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:58.598 [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:58.857 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:58.857 [343/710] Linking static target lib/librte_power.a 00:02:58.857 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:58.857 [345/710] Linking static target lib/librte_rawdev.a 00:02:58.857 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:58.857 [347/710] Linking static target lib/librte_regexdev.a 00:02:58.857 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:58.857 [349/710] Linking static target lib/librte_member.a 00:02:59.115 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:59.115 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:59.115 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:59.115 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.115 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:59.115 [355/710] Linking static target lib/librte_mldev.a 00:02:59.115 [356/710] Linking target lib/librte_member.so.24.0 00:02:59.374 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.374 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:59.374 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.374 [360/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 
00:02:59.374 [361/710] Linking target lib/librte_power.so.24.0 00:02:59.374 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:59.631 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.631 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:59.631 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:59.890 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:59.890 [367/710] Linking static target lib/librte_rib.a 00:02:59.890 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.890 [369/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.890 [370/710] Linking static target lib/librte_reorder.a 00:02:59.890 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:59.890 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:00.149 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:00.149 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:00.149 [375/710] Linking static target lib/librte_stack.a 00:03:00.149 [376/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.149 [377/710] Linking static target lib/librte_security.a 00:03:00.149 [378/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.149 [379/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.149 [380/710] Linking target lib/librte_reorder.so.24.0 00:03:00.409 [381/710] Linking target lib/librte_rib.so.24.0 00:03:00.409 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.409 [383/710] Linking target lib/librte_stack.so.24.0 00:03:00.409 [384/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:00.409 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:00.409 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.409 [387/710] Linking target lib/librte_mldev.so.24.0 00:03:00.683 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.683 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.683 [390/710] Linking target lib/librte_security.so.24.0 00:03:00.683 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.683 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:00.956 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.956 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:00.956 [395/710] Linking static target lib/librte_sched.a 00:03:01.215 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.215 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.215 [398/710] Linking target lib/librte_sched.so.24.0 00:03:01.474 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.474 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:01.474 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.733 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:01.733 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:01.991 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.991 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:02.250 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:02.250 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:02.509 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:02.509 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:02.509 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:02.509 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:02.509 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:02.509 [413/710] Linking static target lib/librte_ipsec.a 00:03:02.767 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.026 [415/710] Linking target lib/librte_ipsec.so.24.0 00:03:03.026 [416/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:03.026 [417/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:03.026 [418/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:03.026 [419/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:03.026 [420/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:03.026 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:03.026 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:03.285 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:03.853 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:03.853 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:03.853 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:03.853 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:03.853 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:04.112 [429/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:04.112 [430/710] Linking static target lib/librte_pdcp.a 00:03:04.112 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:04.112 [432/710] Linking static target lib/librte_fib.a 00:03:04.371 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.371 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.371 [435/710] Linking target lib/librte_pdcp.so.24.0 00:03:04.371 [436/710] Linking target lib/librte_fib.so.24.0 00:03:04.630 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:04.889 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:04.889 [439/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:04.889 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:04.889 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:05.148 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:05.148 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:05.148 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:05.407 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:05.666 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:05.666 [447/710] Linking static target lib/librte_port.a 00:03:05.666 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:05.666 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:05.925 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:05.925 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:05.925 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:05.925 [453/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:06.184 [454/710] Linking static target lib/librte_pdump.a 00:03:06.184 [455/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.184 [456/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.184 [457/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:06.184 [458/710] Linking target lib/librte_port.so.24.0 00:03:06.184 [459/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.184 [460/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:06.470 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:06.470 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:06.728 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:06.728 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:06.987 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:06.987 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:06.987 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:06.987 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:07.246 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:07.505 [470/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:07.505 [471/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:07.505 [472/710] Linking static target lib/librte_table.a 00:03:07.505 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:08.073 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:08.073 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.073 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.073 [477/710] Linking target lib/librte_table.so.24.0 00:03:08.331 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:08.331 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:08.331 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:08.589 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:08.848 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:08.848 [483/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:08.848 [484/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:08.848 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:09.107 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:09.107 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:09.366 [488/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:09.366 [489/710] Linking static target lib/librte_graph.a 00:03:09.366 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:09.625 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:09.625 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:09.883 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:09.883 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.883 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:09.883 [496/710] Linking target lib/librte_graph.so.24.0 00:03:10.142 [497/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:10.400 [498/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:10.400 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:10.659 [500/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:10.659 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:10.659 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:10.659 [503/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:10.917 [504/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.917 [505/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:10.917 [506/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:11.176 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:11.434 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:11.434 [509/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:11.434 [510/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:11.434 [511/710] Linking static target lib/librte_node.a 00:03:11.434 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:11.434 [513/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:11.693 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.693 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:11.693 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.693 [517/710] Linking target lib/librte_node.so.24.0 00:03:11.951 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.951 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.951 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:11.951 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:12.210 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:12.210 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.210 [524/710] Linking static target drivers/librte_bus_pci.a 00:03:12.210 [525/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:12.210 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.210 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:12.210 [528/710] Linking static target drivers/librte_bus_vdev.a 00:03:12.468 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:12.468 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:12.468 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:12.468 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:12.468 [533/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.468 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:12.727 [535/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.727 [536/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:12.727 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:12.727 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:12.727 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:12.985 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:12.985 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.985 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:12.985 [543/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:12.985 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.985 [545/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:12.985 [546/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:13.552 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:13.552 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:13.552 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:13.811 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:13.811 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:14.745 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:14.745 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:14.745 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:14.745 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:14.745 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:14.745 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:14.745 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:15.025 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:15.326 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:15.584 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:15.584 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:15.842 [563/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:15.842 [564/710] 
Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:16.101 [565/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:16.101 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:16.666 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:16.666 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:16.666 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:16.666 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:16.666 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:16.925 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:16.925 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:17.183 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:17.183 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:17.183 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:17.183 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:17.183 [578/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:17.442 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:17.442 [580/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:17.701 [581/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.701 [582/710] Linking static target lib/librte_vhost.a 00:03:17.701 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:17.701 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:17.701 [585/710] Linking static target drivers/librte_net_i40e.a 00:03:17.701 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:17.701 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:17.959 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:17.959 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:17.959 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:18.218 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:18.218 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:18.218 [593/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.476 [594/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:18.476 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:18.476 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:18.735 [597/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:18.735 [598/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.994 [599/710] Linking target lib/librte_vhost.so.24.0 00:03:18.994 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:19.252 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:19.252 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:19.511 [603/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:19.511 [604/710] 
Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:19.511 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:19.511 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:19.511 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.077 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:20.077 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:20.335 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:20.335 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:20.335 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:20.335 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:20.335 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:20.335 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:20.335 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:20.594 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:20.852 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:20.852 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:21.110 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:21.110 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:21.369 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:21.369 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:21.935 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:22.194 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:22.194 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:22.454 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:22.454 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:22.454 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:22.712 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:22.712 [631/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:22.712 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:22.712 [633/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.712 [634/710] Linking static target lib/librte_pipeline.a 00:03:22.712 [635/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:22.712 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:22.972 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:23.230 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:23.230 [639/710] Linking target app/dpdk-dumpcap 00:03:23.230 [640/710] Compiling C 
object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:23.230 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:23.230 [642/710] Linking target app/dpdk-graph 00:03:23.489 [643/710] Linking target app/dpdk-pdump 00:03:23.489 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:23.749 [645/710] Linking target app/dpdk-proc-info 00:03:23.749 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:23.749 [647/710] Linking target app/dpdk-test-acl 00:03:23.749 [648/710] Linking target app/dpdk-test-compress-perf 00:03:23.749 [649/710] Linking target app/dpdk-test-cmdline 00:03:23.749 [650/710] Linking target app/dpdk-test-dma-perf 00:03:23.749 [651/710] Linking target app/dpdk-test-crypto-perf 00:03:24.008 [652/710] Linking target app/dpdk-test-fib 00:03:24.266 [653/710] Linking target app/dpdk-test-eventdev 00:03:24.266 [654/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:24.266 [655/710] Linking target app/dpdk-test-flow-perf 00:03:24.267 [656/710] Linking target app/dpdk-test-gpudev 00:03:24.267 [657/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:24.525 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:24.525 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:24.525 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:24.525 [661/710] Linking target app/dpdk-test-bbdev 00:03:24.784 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:24.784 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:24.784 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:25.043 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:25.302 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:25.302 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:25.302 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:25.302 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:25.302 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:25.302 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:25.560 [672/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.560 [673/710] Linking target lib/librte_pipeline.so.24.0 00:03:25.818 [674/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:25.818 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:25.818 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:26.076 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:26.076 [678/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:26.334 [679/710] Linking target app/dpdk-test-pipeline 00:03:26.334 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:26.334 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:26.592 [682/710] Linking target app/dpdk-test-mldev 00:03:26.592 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:27.158 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:27.158 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:27.158 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:27.158 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:27.416 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:27.674 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:27.674 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:27.674 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:27.674 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:27.931 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:28.189 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:28.448 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:28.706 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:28.706 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:28.706 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:28.964 [699/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:28.964 [700/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:29.222 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:29.222 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:29.222 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:29.547 [704/710] Linking target app/dpdk-test-sad 00:03:29.547 [705/710] Linking target app/dpdk-test-regex 00:03:29.547 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:29.547 [707/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:29.825 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:30.082 [709/710] Linking target app/dpdk-testpmd 00:03:30.340 [710/710] Linking target app/dpdk-test-security-perf 00:03:30.340 13:17:35 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:30.340 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:30.340 [0/1] Installing files. 
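The ninja invocation above (from common/autobuild_common.sh) drives the DPDK install step whose output follows. A minimal sketch of the equivalent out-of-tree meson/ninja sequence, using the build directory and install prefix visible in this log; the meson setup options are an assumption for illustration, not the exact autotest flags:

    # Work from the DPDK checkout used by this job (path taken from the log).
    cd /home/vagrant/spdk_repo/dpdk

    # Configure an out-of-tree build directory; the prefix matches the
    # .../dpdk/build destinations seen in the install listing below
    # (assumed here, not read from the autobuild script).
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build

    # Compile with 10 parallel jobs, then run the install step shown above.
    ninja -C build-tmp -j10
    ninja -C build-tmp -j10 install
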
00:03:30.600 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:30.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.602 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.863 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.864 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.865 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.865 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:30.865 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:30.865 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.865 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:31.125 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:31.125 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:31.125 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.125 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:31.125 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.125 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.126 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.127 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.128 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:31.388 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:31.388 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:31.388 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:31.388 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:31.388 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:31.388 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:31.388 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:31.388 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:31.388 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:31.388 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:31.389 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:31.389 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:31.389 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:31.389 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:31.389 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:31.389 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:31.389 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:31.389 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:31.389 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:31.389 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:31.389 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:31.389 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:31.389 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:31.389 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:31.389 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:31.389 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:31.389 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:31.389 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:31.389 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:31.389 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:31.389 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:31.389 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:31.389 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:31.389 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:31.389 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:31.389 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:31.389 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:31.389 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:31.389 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:31.389 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:31.389 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:31.389 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:31.389 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:31.389 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:31.389 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:31.389 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:31.389 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:31.389 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:31.389 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:31.389 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:31.389 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:31.389 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:31.389 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:31.389 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:31.389 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:31.389 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:31.389 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:31.389 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:31.389 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:31.389 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:31.389 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:31.389 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:31.389 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:31.389 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:31.389 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:31.389 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:31.389 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:31.389 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:31.389 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:31.389 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:31.389 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:31.389 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:31.389 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:31.389 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:31.389 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:31.389 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:31.389 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:31.389 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:31.389 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:31.389 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:31.389 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:31.389 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:31.389 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:31.389 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:31.389 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:31.389 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:31.389 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:31.389 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:31.389 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:31.389 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:31.389 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:31.389 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:31.389 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:31.389 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:31.389 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:31.389 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:31.389 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:31.389 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:31.389 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:31.389 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:31.389 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:31.389 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:31.389 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:31.389 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:31.389 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:31.389 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:31.389 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:31.389 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:31.389 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:31.389 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:31.389 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:31.389 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:31.389 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:31.389 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:31.389 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:31.389 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:31.389 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:31.389 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:31.389 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:31.389 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:31.389 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:31.389 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:31.389 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:31.390 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:31.390 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:31.390 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:31.390 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:31.390 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:31.390 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:31.390 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:31.390 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:31.390 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:31.390 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:31.390 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:31.390 13:17:36 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:31.390 13:17:36 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:31.390 13:17:36 -- common/autobuild_common.sh@203 -- $ cat 00:03:31.390 13:17:36 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:31.390 00:03:31.390 real 0m59.570s 00:03:31.390 user 7m9.636s 00:03:31.390 sys 1m11.451s 00:03:31.390 13:17:36 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:31.390 ************************************ 00:03:31.390 END TEST build_native_dpdk 00:03:31.390 ************************************ 00:03:31.390 13:17:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.390 13:17:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:31.390 13:17:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:31.390 13:17:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:31.648 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:31.648 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:31.648 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:31.648 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:32.214 Using 'verbs' RDMA provider 00:03:44.979 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:59.856 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:59.856 go version go1.21.1 linux/amd64 00:03:59.856 Creating mk/config.mk...done. 00:03:59.856 Creating mk/cc.flags.mk...done. 00:03:59.856 Type 'make' to build. 00:03:59.856 13:18:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:59.856 13:18:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:59.856 13:18:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:59.856 13:18:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:59.856 ************************************ 00:03:59.856 START TEST make 00:03:59.856 ************************************ 00:03:59.857 13:18:03 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:59.857 make[1]: Nothing to be done for 'all'. 
00:04:21.782 CC lib/log/log.o 00:04:21.782 CC lib/log/log_flags.o 00:04:21.782 CC lib/log/log_deprecated.o 00:04:21.782 CC lib/ut_mock/mock.o 00:04:21.782 CC lib/ut/ut.o 00:04:21.782 LIB libspdk_ut_mock.a 00:04:21.782 LIB libspdk_ut.a 00:04:21.782 LIB libspdk_log.a 00:04:21.782 SO libspdk_ut_mock.so.5.0 00:04:21.782 SO libspdk_ut.so.1.0 00:04:21.782 SO libspdk_log.so.6.1 00:04:21.782 SYMLINK libspdk_ut_mock.so 00:04:21.782 SYMLINK libspdk_ut.so 00:04:21.782 SYMLINK libspdk_log.so 00:04:21.782 CC lib/ioat/ioat.o 00:04:21.782 CC lib/util/bit_array.o 00:04:21.782 CC lib/util/base64.o 00:04:21.782 CC lib/util/cpuset.o 00:04:21.782 CC lib/util/crc16.o 00:04:21.782 CC lib/util/crc32.o 00:04:21.782 CC lib/util/crc32c.o 00:04:21.782 CC lib/dma/dma.o 00:04:21.782 CXX lib/trace_parser/trace.o 00:04:21.782 CC lib/vfio_user/host/vfio_user_pci.o 00:04:22.040 CC lib/util/crc32_ieee.o 00:04:22.040 CC lib/vfio_user/host/vfio_user.o 00:04:22.040 CC lib/util/crc64.o 00:04:22.040 CC lib/util/dif.o 00:04:22.040 CC lib/util/fd.o 00:04:22.040 LIB libspdk_dma.a 00:04:22.040 CC lib/util/file.o 00:04:22.040 SO libspdk_dma.so.3.0 00:04:22.040 LIB libspdk_ioat.a 00:04:22.040 CC lib/util/hexlify.o 00:04:22.040 SYMLINK libspdk_dma.so 00:04:22.040 CC lib/util/iov.o 00:04:22.040 CC lib/util/math.o 00:04:22.040 CC lib/util/pipe.o 00:04:22.040 SO libspdk_ioat.so.6.0 00:04:22.040 CC lib/util/strerror_tls.o 00:04:22.040 LIB libspdk_vfio_user.a 00:04:22.040 CC lib/util/string.o 00:04:22.040 SYMLINK libspdk_ioat.so 00:04:22.040 CC lib/util/uuid.o 00:04:22.299 SO libspdk_vfio_user.so.4.0 00:04:22.299 CC lib/util/fd_group.o 00:04:22.299 CC lib/util/xor.o 00:04:22.299 SYMLINK libspdk_vfio_user.so 00:04:22.299 CC lib/util/zipf.o 00:04:22.557 LIB libspdk_util.a 00:04:22.557 SO libspdk_util.so.8.0 00:04:22.815 SYMLINK libspdk_util.so 00:04:22.815 LIB libspdk_trace_parser.a 00:04:22.815 SO libspdk_trace_parser.so.4.0 00:04:22.815 CC lib/json/json_parse.o 00:04:22.815 CC lib/conf/conf.o 00:04:22.815 CC lib/json/json_util.o 00:04:22.815 CC lib/json/json_write.o 00:04:22.815 CC lib/env_dpdk/env.o 00:04:22.815 CC lib/rdma/rdma_verbs.o 00:04:22.815 CC lib/rdma/common.o 00:04:22.815 CC lib/vmd/vmd.o 00:04:22.815 CC lib/idxd/idxd.o 00:04:22.815 SYMLINK libspdk_trace_parser.so 00:04:22.815 CC lib/vmd/led.o 00:04:23.073 CC lib/env_dpdk/memory.o 00:04:23.073 CC lib/idxd/idxd_user.o 00:04:23.073 LIB libspdk_conf.a 00:04:23.073 CC lib/idxd/idxd_kernel.o 00:04:23.073 CC lib/env_dpdk/pci.o 00:04:23.073 SO libspdk_conf.so.5.0 00:04:23.073 LIB libspdk_json.a 00:04:23.073 LIB libspdk_rdma.a 00:04:23.073 SYMLINK libspdk_conf.so 00:04:23.073 SO libspdk_rdma.so.5.0 00:04:23.073 CC lib/env_dpdk/init.o 00:04:23.073 SO libspdk_json.so.5.1 00:04:23.331 SYMLINK libspdk_rdma.so 00:04:23.331 CC lib/env_dpdk/threads.o 00:04:23.331 SYMLINK libspdk_json.so 00:04:23.331 CC lib/env_dpdk/pci_ioat.o 00:04:23.331 CC lib/env_dpdk/pci_virtio.o 00:04:23.331 CC lib/env_dpdk/pci_vmd.o 00:04:23.331 CC lib/env_dpdk/pci_idxd.o 00:04:23.331 CC lib/env_dpdk/pci_event.o 00:04:23.331 CC lib/env_dpdk/sigbus_handler.o 00:04:23.331 LIB libspdk_idxd.a 00:04:23.331 CC lib/env_dpdk/pci_dpdk.o 00:04:23.331 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:23.331 SO libspdk_idxd.so.11.0 00:04:23.590 LIB libspdk_vmd.a 00:04:23.590 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:23.590 SYMLINK libspdk_idxd.so 00:04:23.590 SO libspdk_vmd.so.5.0 00:04:23.590 SYMLINK libspdk_vmd.so 00:04:23.590 CC lib/jsonrpc/jsonrpc_server.o 00:04:23.590 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:23.590 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:23.590 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:23.848 LIB libspdk_jsonrpc.a 00:04:23.848 SO libspdk_jsonrpc.so.5.1 00:04:24.106 SYMLINK libspdk_jsonrpc.so 00:04:24.106 CC lib/rpc/rpc.o 00:04:24.106 LIB libspdk_env_dpdk.a 00:04:24.364 SO libspdk_env_dpdk.so.13.0 00:04:24.364 LIB libspdk_rpc.a 00:04:24.364 SO libspdk_rpc.so.5.0 00:04:24.364 SYMLINK libspdk_env_dpdk.so 00:04:24.364 SYMLINK libspdk_rpc.so 00:04:24.622 CC lib/notify/notify.o 00:04:24.622 CC lib/notify/notify_rpc.o 00:04:24.622 CC lib/trace/trace.o 00:04:24.622 CC lib/trace/trace_flags.o 00:04:24.622 CC lib/trace/trace_rpc.o 00:04:24.622 CC lib/sock/sock.o 00:04:24.622 CC lib/sock/sock_rpc.o 00:04:24.622 LIB libspdk_notify.a 00:04:24.622 SO libspdk_notify.so.5.0 00:04:24.880 SYMLINK libspdk_notify.so 00:04:24.880 LIB libspdk_trace.a 00:04:24.880 SO libspdk_trace.so.9.0 00:04:24.880 SYMLINK libspdk_trace.so 00:04:24.880 LIB libspdk_sock.a 00:04:25.138 SO libspdk_sock.so.8.0 00:04:25.138 CC lib/thread/iobuf.o 00:04:25.138 CC lib/thread/thread.o 00:04:25.138 SYMLINK libspdk_sock.so 00:04:25.397 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:25.397 CC lib/nvme/nvme_ctrlr.o 00:04:25.397 CC lib/nvme/nvme_fabric.o 00:04:25.397 CC lib/nvme/nvme_ns_cmd.o 00:04:25.397 CC lib/nvme/nvme_ns.o 00:04:25.397 CC lib/nvme/nvme_pcie_common.o 00:04:25.397 CC lib/nvme/nvme_pcie.o 00:04:25.397 CC lib/nvme/nvme_qpair.o 00:04:25.397 CC lib/nvme/nvme.o 00:04:25.963 CC lib/nvme/nvme_quirks.o 00:04:25.963 CC lib/nvme/nvme_transport.o 00:04:26.222 CC lib/nvme/nvme_discovery.o 00:04:26.222 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:26.222 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:26.222 CC lib/nvme/nvme_tcp.o 00:04:26.222 CC lib/nvme/nvme_opal.o 00:04:26.480 CC lib/nvme/nvme_io_msg.o 00:04:26.480 LIB libspdk_thread.a 00:04:26.480 CC lib/nvme/nvme_poll_group.o 00:04:26.480 SO libspdk_thread.so.9.0 00:04:26.738 SYMLINK libspdk_thread.so 00:04:26.738 CC lib/nvme/nvme_zns.o 00:04:26.738 CC lib/nvme/nvme_cuse.o 00:04:26.738 CC lib/nvme/nvme_vfio_user.o 00:04:26.738 CC lib/nvme/nvme_rdma.o 00:04:26.997 CC lib/accel/accel.o 00:04:26.997 CC lib/blob/blobstore.o 00:04:26.997 CC lib/blob/request.o 00:04:27.255 CC lib/blob/zeroes.o 00:04:27.255 CC lib/accel/accel_rpc.o 00:04:27.255 CC lib/accel/accel_sw.o 00:04:27.255 CC lib/blob/blob_bs_dev.o 00:04:27.513 CC lib/init/json_config.o 00:04:27.513 CC lib/init/subsystem.o 00:04:27.513 CC lib/init/subsystem_rpc.o 00:04:27.514 CC lib/virtio/virtio.o 00:04:27.514 CC lib/virtio/virtio_vhost_user.o 00:04:27.772 CC lib/init/rpc.o 00:04:27.772 CC lib/virtio/virtio_vfio_user.o 00:04:27.772 CC lib/virtio/virtio_pci.o 00:04:27.772 LIB libspdk_init.a 00:04:27.772 LIB libspdk_accel.a 00:04:27.772 SO libspdk_init.so.4.0 00:04:28.030 SO libspdk_accel.so.14.0 00:04:28.030 SYMLINK libspdk_init.so 00:04:28.030 SYMLINK libspdk_accel.so 00:04:28.030 LIB libspdk_virtio.a 00:04:28.030 SO libspdk_virtio.so.6.0 00:04:28.030 CC lib/event/app.o 00:04:28.030 CC lib/event/log_rpc.o 00:04:28.030 CC lib/event/reactor.o 00:04:28.030 CC lib/event/app_rpc.o 00:04:28.030 CC lib/event/scheduler_static.o 00:04:28.030 CC lib/bdev/bdev.o 00:04:28.030 CC lib/bdev/bdev_rpc.o 00:04:28.030 SYMLINK libspdk_virtio.so 00:04:28.030 CC lib/bdev/bdev_zone.o 00:04:28.030 LIB libspdk_nvme.a 00:04:28.289 CC lib/bdev/part.o 00:04:28.289 CC lib/bdev/scsi_nvme.o 00:04:28.289 SO libspdk_nvme.so.12.0 00:04:28.547 LIB libspdk_event.a 00:04:28.547 SO libspdk_event.so.12.0 00:04:28.547 SYMLINK libspdk_nvme.so 00:04:28.547 SYMLINK libspdk_event.so 00:04:29.485 
LIB libspdk_blob.a 00:04:29.485 SO libspdk_blob.so.10.1 00:04:29.743 SYMLINK libspdk_blob.so 00:04:29.743 CC lib/lvol/lvol.o 00:04:29.743 CC lib/blobfs/blobfs.o 00:04:29.743 CC lib/blobfs/tree.o 00:04:30.309 LIB libspdk_bdev.a 00:04:30.567 SO libspdk_bdev.so.14.0 00:04:30.567 SYMLINK libspdk_bdev.so 00:04:30.567 LIB libspdk_blobfs.a 00:04:30.567 SO libspdk_blobfs.so.9.0 00:04:30.567 CC lib/scsi/dev.o 00:04:30.567 CC lib/nbd/nbd.o 00:04:30.567 CC lib/nbd/nbd_rpc.o 00:04:30.567 CC lib/ublk/ublk.o 00:04:30.567 CC lib/ftl/ftl_core.o 00:04:30.567 CC lib/ublk/ublk_rpc.o 00:04:30.567 CC lib/scsi/lun.o 00:04:30.567 CC lib/nvmf/ctrlr.o 00:04:30.567 LIB libspdk_lvol.a 00:04:30.567 SYMLINK libspdk_blobfs.so 00:04:30.567 CC lib/nvmf/ctrlr_discovery.o 00:04:30.825 SO libspdk_lvol.so.9.1 00:04:30.825 SYMLINK libspdk_lvol.so 00:04:30.825 CC lib/ftl/ftl_init.o 00:04:30.825 CC lib/nvmf/ctrlr_bdev.o 00:04:30.825 CC lib/nvmf/subsystem.o 00:04:30.825 CC lib/ftl/ftl_layout.o 00:04:31.084 CC lib/scsi/port.o 00:04:31.084 CC lib/ftl/ftl_debug.o 00:04:31.084 CC lib/scsi/scsi.o 00:04:31.084 LIB libspdk_nbd.a 00:04:31.084 SO libspdk_nbd.so.6.0 00:04:31.084 CC lib/nvmf/nvmf.o 00:04:31.084 SYMLINK libspdk_nbd.so 00:04:31.084 CC lib/ftl/ftl_io.o 00:04:31.342 CC lib/nvmf/nvmf_rpc.o 00:04:31.342 LIB libspdk_ublk.a 00:04:31.342 CC lib/nvmf/transport.o 00:04:31.342 CC lib/ftl/ftl_sb.o 00:04:31.342 CC lib/scsi/scsi_bdev.o 00:04:31.342 SO libspdk_ublk.so.2.0 00:04:31.342 SYMLINK libspdk_ublk.so 00:04:31.342 CC lib/scsi/scsi_pr.o 00:04:31.342 CC lib/scsi/scsi_rpc.o 00:04:31.601 CC lib/ftl/ftl_l2p.o 00:04:31.601 CC lib/ftl/ftl_l2p_flat.o 00:04:31.601 CC lib/scsi/task.o 00:04:31.601 CC lib/nvmf/tcp.o 00:04:31.859 CC lib/ftl/ftl_nv_cache.o 00:04:31.859 CC lib/nvmf/rdma.o 00:04:31.859 CC lib/ftl/ftl_band.o 00:04:31.859 LIB libspdk_scsi.a 00:04:31.859 SO libspdk_scsi.so.8.0 00:04:31.859 CC lib/ftl/ftl_band_ops.o 00:04:31.859 CC lib/ftl/ftl_writer.o 00:04:31.859 SYMLINK libspdk_scsi.so 00:04:31.859 CC lib/ftl/ftl_rq.o 00:04:32.118 CC lib/ftl/ftl_reloc.o 00:04:32.118 CC lib/iscsi/conn.o 00:04:32.118 CC lib/ftl/ftl_l2p_cache.o 00:04:32.118 CC lib/vhost/vhost.o 00:04:32.118 CC lib/ftl/ftl_p2l.o 00:04:32.118 CC lib/ftl/mngt/ftl_mngt.o 00:04:32.376 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:32.376 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:32.376 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.635 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.635 CC lib/vhost/vhost_rpc.o 00:04:32.635 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.635 CC lib/iscsi/init_grp.o 00:04:32.635 CC lib/vhost/vhost_scsi.o 00:04:32.635 CC lib/vhost/vhost_blk.o 00:04:32.635 CC lib/vhost/rte_vhost_user.o 00:04:32.893 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.893 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.893 CC lib/iscsi/iscsi.o 00:04:32.893 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.893 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.152 CC lib/iscsi/md5.o 00:04:33.152 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.152 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.152 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.152 CC lib/ftl/utils/ftl_conf.o 00:04:33.411 CC lib/iscsi/param.o 00:04:33.411 CC lib/ftl/utils/ftl_md.o 00:04:33.411 CC lib/ftl/utils/ftl_mempool.o 00:04:33.411 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.670 CC lib/ftl/utils/ftl_property.o 00:04:33.670 CC lib/iscsi/portal_grp.o 00:04:33.670 CC lib/iscsi/tgt_node.o 00:04:33.670 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.670 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.670 CC lib/iscsi/iscsi_subsystem.o 00:04:33.670 LIB libspdk_nvmf.a 
00:04:33.670 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.670 LIB libspdk_vhost.a 00:04:33.933 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.933 SO libspdk_nvmf.so.17.0 00:04:33.933 SO libspdk_vhost.so.7.1 00:04:33.933 CC lib/iscsi/iscsi_rpc.o 00:04:33.933 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.933 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.933 CC lib/iscsi/task.o 00:04:33.933 SYMLINK libspdk_vhost.so 00:04:33.933 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.933 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.933 SYMLINK libspdk_nvmf.so 00:04:33.933 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.933 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:34.211 CC lib/ftl/base/ftl_base_dev.o 00:04:34.211 CC lib/ftl/base/ftl_base_bdev.o 00:04:34.211 CC lib/ftl/ftl_trace.o 00:04:34.211 LIB libspdk_iscsi.a 00:04:34.490 SO libspdk_iscsi.so.7.0 00:04:34.490 LIB libspdk_ftl.a 00:04:34.490 SYMLINK libspdk_iscsi.so 00:04:34.490 SO libspdk_ftl.so.8.0 00:04:34.753 SYMLINK libspdk_ftl.so 00:04:35.011 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.012 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.012 CC module/accel/iaa/accel_iaa.o 00:04:35.012 CC module/accel/dsa/accel_dsa.o 00:04:35.012 CC module/scheduler/gscheduler/gscheduler.o 00:04:35.012 CC module/sock/posix/posix.o 00:04:35.012 CC module/accel/error/accel_error.o 00:04:35.012 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:35.012 CC module/accel/ioat/accel_ioat.o 00:04:35.012 CC module/blob/bdev/blob_bdev.o 00:04:35.012 LIB libspdk_env_dpdk_rpc.a 00:04:35.270 SO libspdk_env_dpdk_rpc.so.5.0 00:04:35.270 LIB libspdk_scheduler_gscheduler.a 00:04:35.270 LIB libspdk_scheduler_dpdk_governor.a 00:04:35.270 SO libspdk_scheduler_gscheduler.so.3.0 00:04:35.270 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.270 CC module/accel/ioat/accel_ioat_rpc.o 00:04:35.270 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:35.270 CC module/accel/error/accel_error_rpc.o 00:04:35.270 LIB libspdk_scheduler_dynamic.a 00:04:35.270 SYMLINK libspdk_scheduler_gscheduler.so 00:04:35.270 CC module/accel/iaa/accel_iaa_rpc.o 00:04:35.270 CC module/accel/dsa/accel_dsa_rpc.o 00:04:35.270 SO libspdk_scheduler_dynamic.so.3.0 00:04:35.270 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:35.270 SYMLINK libspdk_scheduler_dynamic.so 00:04:35.270 LIB libspdk_blob_bdev.a 00:04:35.270 SO libspdk_blob_bdev.so.10.1 00:04:35.270 LIB libspdk_accel_ioat.a 00:04:35.270 LIB libspdk_accel_error.a 00:04:35.529 LIB libspdk_accel_dsa.a 00:04:35.529 SO libspdk_accel_ioat.so.5.0 00:04:35.529 LIB libspdk_accel_iaa.a 00:04:35.529 SYMLINK libspdk_blob_bdev.so 00:04:35.529 SO libspdk_accel_error.so.1.0 00:04:35.529 SO libspdk_accel_dsa.so.4.0 00:04:35.529 SO libspdk_accel_iaa.so.2.0 00:04:35.529 SYMLINK libspdk_accel_ioat.so 00:04:35.529 SYMLINK libspdk_accel_dsa.so 00:04:35.529 SYMLINK libspdk_accel_error.so 00:04:35.529 SYMLINK libspdk_accel_iaa.so 00:04:35.529 CC module/bdev/null/bdev_null.o 00:04:35.529 CC module/bdev/error/vbdev_error.o 00:04:35.529 CC module/bdev/gpt/gpt.o 00:04:35.529 CC module/bdev/lvol/vbdev_lvol.o 00:04:35.529 CC module/bdev/nvme/bdev_nvme.o 00:04:35.529 CC module/bdev/delay/vbdev_delay.o 00:04:35.529 CC module/bdev/malloc/bdev_malloc.o 00:04:35.529 CC module/blobfs/bdev/blobfs_bdev.o 00:04:35.529 CC module/bdev/passthru/vbdev_passthru.o 00:04:35.787 LIB libspdk_sock_posix.a 00:04:35.787 SO libspdk_sock_posix.so.5.0 00:04:35.787 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:35.787 CC module/bdev/gpt/vbdev_gpt.o 00:04:35.787 CC module/bdev/error/vbdev_error_rpc.o 00:04:35.787 CC 
module/bdev/null/bdev_null_rpc.o 00:04:35.787 SYMLINK libspdk_sock_posix.so 00:04:35.787 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:35.787 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.046 LIB libspdk_blobfs_bdev.a 00:04:36.046 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.046 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.046 LIB libspdk_bdev_error.a 00:04:36.046 SO libspdk_blobfs_bdev.so.5.0 00:04:36.046 SO libspdk_bdev_error.so.5.0 00:04:36.046 LIB libspdk_bdev_null.a 00:04:36.046 LIB libspdk_bdev_malloc.a 00:04:36.046 SYMLINK libspdk_bdev_error.so 00:04:36.046 SYMLINK libspdk_blobfs_bdev.so 00:04:36.046 LIB libspdk_bdev_passthru.a 00:04:36.046 LIB libspdk_bdev_gpt.a 00:04:36.046 SO libspdk_bdev_null.so.5.0 00:04:36.046 SO libspdk_bdev_malloc.so.5.0 00:04:36.046 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.046 SO libspdk_bdev_passthru.so.5.0 00:04:36.046 SO libspdk_bdev_gpt.so.5.0 00:04:36.046 LIB libspdk_bdev_delay.a 00:04:36.046 SYMLINK libspdk_bdev_malloc.so 00:04:36.046 SYMLINK libspdk_bdev_null.so 00:04:36.046 CC module/bdev/nvme/nvme_rpc.o 00:04:36.304 CC module/bdev/raid/bdev_raid.o 00:04:36.304 CC module/bdev/split/vbdev_split.o 00:04:36.304 SO libspdk_bdev_delay.so.5.0 00:04:36.304 SYMLINK libspdk_bdev_gpt.so 00:04:36.304 SYMLINK libspdk_bdev_passthru.so 00:04:36.304 CC module/bdev/split/vbdev_split_rpc.o 00:04:36.304 SYMLINK libspdk_bdev_delay.so 00:04:36.304 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:36.304 CC module/bdev/aio/bdev_aio.o 00:04:36.304 CC module/bdev/ftl/bdev_ftl.o 00:04:36.304 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.304 LIB libspdk_bdev_lvol.a 00:04:36.304 LIB libspdk_bdev_split.a 00:04:36.563 SO libspdk_bdev_lvol.so.5.0 00:04:36.563 SO libspdk_bdev_split.so.5.0 00:04:36.563 CC module/bdev/iscsi/bdev_iscsi.o 00:04:36.563 SYMLINK libspdk_bdev_lvol.so 00:04:36.563 SYMLINK libspdk_bdev_split.so 00:04:36.563 CC module/bdev/nvme/vbdev_opal.o 00:04:36.563 CC module/bdev/aio/bdev_aio_rpc.o 00:04:36.563 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:36.563 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:36.563 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.563 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:36.821 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:36.821 LIB libspdk_bdev_aio.a 00:04:36.821 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:36.821 LIB libspdk_bdev_zone_block.a 00:04:36.821 SO libspdk_bdev_aio.so.5.0 00:04:36.821 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.821 SO libspdk_bdev_zone_block.so.5.0 00:04:36.821 LIB libspdk_bdev_iscsi.a 00:04:36.821 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:36.821 SYMLINK libspdk_bdev_aio.so 00:04:36.821 CC module/bdev/raid/bdev_raid_rpc.o 00:04:36.821 SO libspdk_bdev_iscsi.so.5.0 00:04:36.821 LIB libspdk_bdev_ftl.a 00:04:36.821 SYMLINK libspdk_bdev_zone_block.so 00:04:36.821 CC module/bdev/raid/bdev_raid_sb.o 00:04:36.821 SO libspdk_bdev_ftl.so.5.0 00:04:37.079 SYMLINK libspdk_bdev_iscsi.so 00:04:37.079 CC module/bdev/raid/raid0.o 00:04:37.079 SYMLINK libspdk_bdev_ftl.so 00:04:37.079 CC module/bdev/raid/raid1.o 00:04:37.079 CC module/bdev/raid/concat.o 00:04:37.337 LIB libspdk_bdev_raid.a 00:04:37.337 LIB libspdk_bdev_virtio.a 00:04:37.337 SO libspdk_bdev_raid.so.5.0 00:04:37.337 SO libspdk_bdev_virtio.so.5.0 00:04:37.337 SYMLINK libspdk_bdev_virtio.so 00:04:37.337 SYMLINK libspdk_bdev_raid.so 00:04:37.595 LIB libspdk_bdev_nvme.a 00:04:37.854 SO libspdk_bdev_nvme.so.6.0 00:04:37.854 SYMLINK libspdk_bdev_nvme.so 00:04:38.112 CC module/event/subsystems/vmd/vmd.o 00:04:38.112 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:04:38.112 CC module/event/subsystems/iobuf/iobuf.o 00:04:38.112 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:38.112 CC module/event/subsystems/scheduler/scheduler.o 00:04:38.112 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:38.112 CC module/event/subsystems/sock/sock.o 00:04:38.370 LIB libspdk_event_vhost_blk.a 00:04:38.370 LIB libspdk_event_scheduler.a 00:04:38.370 LIB libspdk_event_vmd.a 00:04:38.370 LIB libspdk_event_iobuf.a 00:04:38.370 SO libspdk_event_vhost_blk.so.2.0 00:04:38.370 SO libspdk_event_scheduler.so.3.0 00:04:38.370 LIB libspdk_event_sock.a 00:04:38.370 SO libspdk_event_vmd.so.5.0 00:04:38.370 SO libspdk_event_iobuf.so.2.0 00:04:38.370 SO libspdk_event_sock.so.4.0 00:04:38.370 SYMLINK libspdk_event_vhost_blk.so 00:04:38.370 SYMLINK libspdk_event_scheduler.so 00:04:38.370 SYMLINK libspdk_event_vmd.so 00:04:38.370 SYMLINK libspdk_event_iobuf.so 00:04:38.370 SYMLINK libspdk_event_sock.so 00:04:38.629 CC module/event/subsystems/accel/accel.o 00:04:38.629 LIB libspdk_event_accel.a 00:04:38.629 SO libspdk_event_accel.so.5.0 00:04:38.887 SYMLINK libspdk_event_accel.so 00:04:38.887 CC module/event/subsystems/bdev/bdev.o 00:04:39.146 LIB libspdk_event_bdev.a 00:04:39.146 SO libspdk_event_bdev.so.5.0 00:04:39.404 SYMLINK libspdk_event_bdev.so 00:04:39.404 CC module/event/subsystems/scsi/scsi.o 00:04:39.404 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.404 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.404 CC module/event/subsystems/nbd/nbd.o 00:04:39.404 CC module/event/subsystems/ublk/ublk.o 00:04:39.661 LIB libspdk_event_scsi.a 00:04:39.661 LIB libspdk_event_ublk.a 00:04:39.661 LIB libspdk_event_nbd.a 00:04:39.661 SO libspdk_event_scsi.so.5.0 00:04:39.661 SO libspdk_event_nbd.so.5.0 00:04:39.661 SO libspdk_event_ublk.so.2.0 00:04:39.661 SYMLINK libspdk_event_nbd.so 00:04:39.661 SYMLINK libspdk_event_scsi.so 00:04:39.661 SYMLINK libspdk_event_ublk.so 00:04:39.661 LIB libspdk_event_nvmf.a 00:04:39.661 SO libspdk_event_nvmf.so.5.0 00:04:39.919 SYMLINK libspdk_event_nvmf.so 00:04:39.919 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.919 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.919 LIB libspdk_event_vhost_scsi.a 00:04:39.919 LIB libspdk_event_iscsi.a 00:04:39.919 SO libspdk_event_vhost_scsi.so.2.0 00:04:39.919 SO libspdk_event_iscsi.so.5.0 00:04:40.178 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.178 SYMLINK libspdk_event_iscsi.so 00:04:40.178 SO libspdk.so.5.0 00:04:40.178 SYMLINK libspdk.so 00:04:40.436 CXX app/trace/trace.o 00:04:40.436 CC examples/nvme/hello_world/hello_world.o 00:04:40.436 CC examples/sock/hello_world/hello_sock.o 00:04:40.436 CC examples/ioat/perf/perf.o 00:04:40.436 CC examples/accel/perf/accel_perf.o 00:04:40.436 CC examples/blob/hello_world/hello_blob.o 00:04:40.436 CC examples/vmd/lsvmd/lsvmd.o 00:04:40.436 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.436 CC test/accel/dif/dif.o 00:04:40.436 CC examples/nvmf/nvmf/nvmf.o 00:04:40.695 LINK lsvmd 00:04:40.695 LINK hello_world 00:04:40.695 LINK ioat_perf 00:04:40.695 LINK hello_bdev 00:04:40.695 LINK hello_blob 00:04:40.695 LINK hello_sock 00:04:40.695 CC examples/vmd/led/led.o 00:04:40.954 LINK spdk_trace 00:04:40.954 LINK nvmf 00:04:40.954 CC examples/ioat/verify/verify.o 00:04:40.954 CC examples/nvme/reconnect/reconnect.o 00:04:40.954 LINK dif 00:04:40.954 LINK accel_perf 00:04:40.954 CC app/trace_record/trace_record.o 00:04:40.954 LINK led 00:04:40.954 CC examples/blob/cli/blobcli.o 00:04:40.954 CC 
examples/bdev/bdevperf/bdevperf.o 00:04:41.212 CC app/nvmf_tgt/nvmf_main.o 00:04:41.212 LINK verify 00:04:41.212 CC test/app/bdev_svc/bdev_svc.o 00:04:41.212 LINK spdk_trace_record 00:04:41.212 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.212 CC test/blobfs/mkfs/mkfs.o 00:04:41.212 CC test/bdev/bdevio/bdevio.o 00:04:41.212 LINK reconnect 00:04:41.212 LINK nvmf_tgt 00:04:41.472 LINK bdev_svc 00:04:41.472 TEST_HEADER include/spdk/accel.h 00:04:41.472 TEST_HEADER include/spdk/accel_module.h 00:04:41.472 TEST_HEADER include/spdk/assert.h 00:04:41.472 TEST_HEADER include/spdk/barrier.h 00:04:41.472 TEST_HEADER include/spdk/base64.h 00:04:41.472 TEST_HEADER include/spdk/bdev.h 00:04:41.472 TEST_HEADER include/spdk/bdev_module.h 00:04:41.472 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.472 TEST_HEADER include/spdk/bit_array.h 00:04:41.472 TEST_HEADER include/spdk/bit_pool.h 00:04:41.472 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.472 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.472 TEST_HEADER include/spdk/blobfs.h 00:04:41.472 TEST_HEADER include/spdk/blob.h 00:04:41.472 TEST_HEADER include/spdk/conf.h 00:04:41.472 TEST_HEADER include/spdk/config.h 00:04:41.472 CC app/spdk_tgt/spdk_tgt.o 00:04:41.472 TEST_HEADER include/spdk/cpuset.h 00:04:41.472 TEST_HEADER include/spdk/crc16.h 00:04:41.472 TEST_HEADER include/spdk/crc32.h 00:04:41.472 TEST_HEADER include/spdk/crc64.h 00:04:41.472 TEST_HEADER include/spdk/dif.h 00:04:41.472 TEST_HEADER include/spdk/dma.h 00:04:41.472 TEST_HEADER include/spdk/endian.h 00:04:41.472 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.472 TEST_HEADER include/spdk/env.h 00:04:41.472 TEST_HEADER include/spdk/event.h 00:04:41.472 TEST_HEADER include/spdk/fd_group.h 00:04:41.472 TEST_HEADER include/spdk/fd.h 00:04:41.472 TEST_HEADER include/spdk/file.h 00:04:41.472 TEST_HEADER include/spdk/ftl.h 00:04:41.472 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.472 TEST_HEADER include/spdk/hexlify.h 00:04:41.472 TEST_HEADER include/spdk/histogram_data.h 00:04:41.472 TEST_HEADER include/spdk/idxd.h 00:04:41.472 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.472 TEST_HEADER include/spdk/init.h 00:04:41.472 TEST_HEADER include/spdk/ioat.h 00:04:41.472 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.472 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.472 TEST_HEADER include/spdk/json.h 00:04:41.472 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.472 TEST_HEADER include/spdk/likely.h 00:04:41.472 TEST_HEADER include/spdk/log.h 00:04:41.472 TEST_HEADER include/spdk/lvol.h 00:04:41.472 LINK mkfs 00:04:41.472 TEST_HEADER include/spdk/memory.h 00:04:41.472 TEST_HEADER include/spdk/mmio.h 00:04:41.472 TEST_HEADER include/spdk/nbd.h 00:04:41.472 TEST_HEADER include/spdk/notify.h 00:04:41.472 TEST_HEADER include/spdk/nvme.h 00:04:41.472 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.472 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.472 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.472 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.472 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.472 LINK iscsi_tgt 00:04:41.472 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.472 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:41.472 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.472 TEST_HEADER include/spdk/nvmf.h 00:04:41.472 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.472 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.472 TEST_HEADER include/spdk/opal.h 00:04:41.472 TEST_HEADER include/spdk/opal_spec.h 00:04:41.472 LINK blobcli 00:04:41.472 TEST_HEADER include/spdk/pci_ids.h 00:04:41.472 TEST_HEADER 
include/spdk/pipe.h 00:04:41.472 TEST_HEADER include/spdk/queue.h 00:04:41.472 TEST_HEADER include/spdk/reduce.h 00:04:41.472 TEST_HEADER include/spdk/rpc.h 00:04:41.472 TEST_HEADER include/spdk/scheduler.h 00:04:41.472 TEST_HEADER include/spdk/scsi.h 00:04:41.472 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.472 TEST_HEADER include/spdk/sock.h 00:04:41.472 TEST_HEADER include/spdk/stdinc.h 00:04:41.472 TEST_HEADER include/spdk/string.h 00:04:41.472 TEST_HEADER include/spdk/thread.h 00:04:41.472 TEST_HEADER include/spdk/trace.h 00:04:41.472 TEST_HEADER include/spdk/trace_parser.h 00:04:41.472 TEST_HEADER include/spdk/tree.h 00:04:41.472 TEST_HEADER include/spdk/ublk.h 00:04:41.472 TEST_HEADER include/spdk/util.h 00:04:41.472 TEST_HEADER include/spdk/uuid.h 00:04:41.472 TEST_HEADER include/spdk/version.h 00:04:41.472 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.472 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.472 TEST_HEADER include/spdk/vhost.h 00:04:41.472 TEST_HEADER include/spdk/vmd.h 00:04:41.472 TEST_HEADER include/spdk/xor.h 00:04:41.472 TEST_HEADER include/spdk/zipf.h 00:04:41.472 CXX test/cpp_headers/accel.o 00:04:41.731 LINK spdk_tgt 00:04:41.731 CC examples/util/zipf/zipf.o 00:04:41.731 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.731 LINK bdevio 00:04:41.731 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.731 CC app/spdk_lspci/spdk_lspci.o 00:04:41.731 CXX test/cpp_headers/accel_module.o 00:04:41.731 LINK zipf 00:04:41.731 CC examples/thread/thread/thread_ex.o 00:04:41.990 LINK bdevperf 00:04:41.990 LINK spdk_lspci 00:04:41.990 LINK nvme_manage 00:04:41.990 CXX test/cpp_headers/assert.o 00:04:41.990 CC examples/idxd/perf/perf.o 00:04:41.990 CC examples/nvme/arbitration/arbitration.o 00:04:41.990 LINK nvme_fuzz 00:04:41.990 CXX test/cpp_headers/barrier.o 00:04:41.990 CC app/spdk_nvme_identify/identify.o 00:04:41.990 CC app/spdk_nvme_perf/perf.o 00:04:41.990 LINK thread 00:04:42.249 CC examples/nvme/hotplug/hotplug.o 00:04:42.249 CXX test/cpp_headers/base64.o 00:04:42.249 LINK idxd_perf 00:04:42.249 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:42.249 CC examples/nvme/abort/abort.o 00:04:42.507 CXX test/cpp_headers/bdev.o 00:04:42.507 LINK arbitration 00:04:42.507 LINK hotplug 00:04:42.507 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:42.507 LINK cmb_copy 00:04:42.507 CXX test/cpp_headers/bdev_module.o 00:04:42.507 LINK pmr_persistence 00:04:42.766 CC test/dma/test_dma/test_dma.o 00:04:42.766 CC test/env/mem_callbacks/mem_callbacks.o 00:04:42.766 LINK abort 00:04:42.766 CC test/event/event_perf/event_perf.o 00:04:42.766 CXX test/cpp_headers/bdev_zone.o 00:04:42.766 LINK spdk_nvme_identify 00:04:43.024 LINK spdk_nvme_perf 00:04:43.024 LINK event_perf 00:04:43.024 CXX test/cpp_headers/bit_array.o 00:04:43.024 CC test/lvol/esnap/esnap.o 00:04:43.024 CXX test/cpp_headers/bit_pool.o 00:04:43.024 CC test/nvme/aer/aer.o 00:04:43.283 LINK test_dma 00:04:43.283 CC test/event/reactor/reactor.o 00:04:43.283 CC app/spdk_nvme_discover/discovery_aer.o 00:04:43.283 CXX test/cpp_headers/blob_bdev.o 00:04:43.283 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.283 LINK reactor 00:04:43.283 LINK aer 00:04:43.542 LINK iscsi_fuzz 00:04:43.543 LINK spdk_nvme_discover 00:04:43.543 CXX test/cpp_headers/blobfs_bdev.o 00:04:43.543 CC test/event/reactor_perf/reactor_perf.o 00:04:43.543 CXX test/cpp_headers/blobfs.o 00:04:43.543 LINK mem_callbacks 00:04:43.543 LINK interrupt_tgt 00:04:43.543 CC test/nvme/reset/reset.o 00:04:43.543 LINK reactor_perf 00:04:43.801 CXX 
test/cpp_headers/blob.o 00:04:43.801 CC app/spdk_top/spdk_top.o 00:04:43.801 CXX test/cpp_headers/conf.o 00:04:43.801 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:43.801 CC test/env/vtophys/vtophys.o 00:04:43.801 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:43.801 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.801 CC test/event/app_repeat/app_repeat.o 00:04:43.801 LINK reset 00:04:43.801 CXX test/cpp_headers/config.o 00:04:43.801 LINK vtophys 00:04:43.801 LINK env_dpdk_post_init 00:04:44.060 CXX test/cpp_headers/cpuset.o 00:04:44.060 CC test/event/scheduler/scheduler.o 00:04:44.060 LINK app_repeat 00:04:44.060 CXX test/cpp_headers/crc16.o 00:04:44.060 CC test/app/histogram_perf/histogram_perf.o 00:04:44.060 CC test/nvme/sgl/sgl.o 00:04:44.060 CC test/env/memory/memory_ut.o 00:04:44.319 LINK scheduler 00:04:44.319 CXX test/cpp_headers/crc32.o 00:04:44.319 LINK vhost_fuzz 00:04:44.319 LINK histogram_perf 00:04:44.319 CC test/nvme/e2edp/nvme_dp.o 00:04:44.319 LINK sgl 00:04:44.577 CC test/app/jsoncat/jsoncat.o 00:04:44.577 CXX test/cpp_headers/crc64.o 00:04:44.577 CC test/rpc_client/rpc_client_test.o 00:04:44.577 CC test/app/stub/stub.o 00:04:44.577 LINK spdk_top 00:04:44.577 LINK jsoncat 00:04:44.577 CC test/env/pci/pci_ut.o 00:04:44.835 LINK nvme_dp 00:04:44.835 CXX test/cpp_headers/dif.o 00:04:44.835 LINK rpc_client_test 00:04:44.835 LINK stub 00:04:44.835 CXX test/cpp_headers/dma.o 00:04:45.094 CC app/spdk_dd/spdk_dd.o 00:04:45.094 CXX test/cpp_headers/endian.o 00:04:45.094 CC test/nvme/overhead/overhead.o 00:04:45.094 CC app/vhost/vhost.o 00:04:45.094 CXX test/cpp_headers/env_dpdk.o 00:04:45.094 LINK memory_ut 00:04:45.094 LINK pci_ut 00:04:45.094 LINK vhost 00:04:45.353 CXX test/cpp_headers/env.o 00:04:45.353 CC app/fio/nvme/fio_plugin.o 00:04:45.353 CC test/nvme/err_injection/err_injection.o 00:04:45.353 LINK spdk_dd 00:04:45.353 CC test/nvme/startup/startup.o 00:04:45.353 LINK overhead 00:04:45.353 CXX test/cpp_headers/event.o 00:04:45.353 CXX test/cpp_headers/fd_group.o 00:04:45.353 LINK err_injection 00:04:45.611 CC test/thread/poller_perf/poller_perf.o 00:04:45.611 LINK startup 00:04:45.611 CXX test/cpp_headers/fd.o 00:04:45.611 CC test/nvme/reserve/reserve.o 00:04:45.611 CXX test/cpp_headers/file.o 00:04:45.611 CC test/nvme/simple_copy/simple_copy.o 00:04:45.611 LINK poller_perf 00:04:45.611 CC app/fio/bdev/fio_plugin.o 00:04:45.870 CXX test/cpp_headers/ftl.o 00:04:45.870 CC test/nvme/connect_stress/connect_stress.o 00:04:45.870 LINK spdk_nvme 00:04:45.870 CXX test/cpp_headers/gpt_spec.o 00:04:45.870 LINK reserve 00:04:45.870 CXX test/cpp_headers/hexlify.o 00:04:46.129 LINK simple_copy 00:04:46.129 CXX test/cpp_headers/histogram_data.o 00:04:46.129 LINK connect_stress 00:04:46.129 CC test/nvme/boot_partition/boot_partition.o 00:04:46.129 CXX test/cpp_headers/idxd.o 00:04:46.129 CC test/nvme/fused_ordering/fused_ordering.o 00:04:46.388 CC test/nvme/compliance/nvme_compliance.o 00:04:46.388 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:46.388 LINK boot_partition 00:04:46.388 LINK fused_ordering 00:04:46.388 CXX test/cpp_headers/idxd_spec.o 00:04:46.388 LINK spdk_bdev 00:04:46.647 CC test/nvme/cuse/cuse.o 00:04:46.647 CC test/nvme/fdp/fdp.o 00:04:46.647 CXX test/cpp_headers/init.o 00:04:46.647 LINK nvme_compliance 00:04:46.647 CXX test/cpp_headers/ioat.o 00:04:46.647 CXX test/cpp_headers/ioat_spec.o 00:04:46.647 LINK doorbell_aers 00:04:46.647 CXX test/cpp_headers/iscsi_spec.o 00:04:46.907 CXX test/cpp_headers/json.o 00:04:46.907 CXX 
test/cpp_headers/jsonrpc.o 00:04:46.907 CXX test/cpp_headers/likely.o 00:04:46.907 CXX test/cpp_headers/log.o 00:04:46.907 CXX test/cpp_headers/lvol.o 00:04:46.907 CXX test/cpp_headers/memory.o 00:04:46.907 LINK fdp 00:04:47.166 CXX test/cpp_headers/mmio.o 00:04:47.166 CXX test/cpp_headers/nbd.o 00:04:47.166 CXX test/cpp_headers/notify.o 00:04:47.166 CXX test/cpp_headers/nvme.o 00:04:47.166 CXX test/cpp_headers/nvme_intel.o 00:04:47.166 CXX test/cpp_headers/nvme_ocssd.o 00:04:47.166 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:47.166 CXX test/cpp_headers/nvme_spec.o 00:04:47.166 CXX test/cpp_headers/nvme_zns.o 00:04:47.166 CXX test/cpp_headers/nvmf_cmd.o 00:04:47.166 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:47.166 CXX test/cpp_headers/nvmf.o 00:04:47.424 CXX test/cpp_headers/nvmf_spec.o 00:04:47.424 CXX test/cpp_headers/nvmf_transport.o 00:04:47.424 CXX test/cpp_headers/opal.o 00:04:47.424 CXX test/cpp_headers/opal_spec.o 00:04:47.424 CXX test/cpp_headers/pipe.o 00:04:47.424 CXX test/cpp_headers/pci_ids.o 00:04:47.424 CXX test/cpp_headers/queue.o 00:04:47.424 CXX test/cpp_headers/reduce.o 00:04:47.424 CXX test/cpp_headers/rpc.o 00:04:47.682 CXX test/cpp_headers/scheduler.o 00:04:47.682 CXX test/cpp_headers/scsi.o 00:04:47.682 CXX test/cpp_headers/scsi_spec.o 00:04:47.682 CXX test/cpp_headers/sock.o 00:04:47.682 CXX test/cpp_headers/stdinc.o 00:04:47.682 LINK cuse 00:04:47.682 CXX test/cpp_headers/string.o 00:04:47.682 CXX test/cpp_headers/thread.o 00:04:47.682 CXX test/cpp_headers/trace.o 00:04:47.682 CXX test/cpp_headers/trace_parser.o 00:04:47.941 CXX test/cpp_headers/tree.o 00:04:47.941 CXX test/cpp_headers/ublk.o 00:04:47.941 CXX test/cpp_headers/util.o 00:04:47.941 CXX test/cpp_headers/uuid.o 00:04:47.941 CXX test/cpp_headers/version.o 00:04:47.941 CXX test/cpp_headers/vfio_user_pci.o 00:04:47.941 CXX test/cpp_headers/vfio_user_spec.o 00:04:47.941 CXX test/cpp_headers/vhost.o 00:04:47.941 CXX test/cpp_headers/vmd.o 00:04:47.941 CXX test/cpp_headers/xor.o 00:04:47.941 CXX test/cpp_headers/zipf.o 00:04:48.199 LINK esnap 00:04:49.134 00:04:49.134 real 0m50.659s 00:04:49.134 user 5m1.132s 00:04:49.134 sys 1m4.297s 00:04:49.134 13:18:54 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:49.134 13:18:54 -- common/autotest_common.sh@10 -- $ set +x 00:04:49.134 ************************************ 00:04:49.134 END TEST make 00:04:49.134 ************************************ 00:04:49.134 13:18:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.135 13:18:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.135 13:18:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.135 13:18:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.135 13:18:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.135 13:18:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.135 13:18:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.135 13:18:54 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.135 13:18:54 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.135 13:18:54 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.135 13:18:54 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.135 13:18:54 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.135 13:18:54 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.135 13:18:54 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.135 13:18:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.135 13:18:54 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.135 13:18:54 -- 
scripts/common.sh@344 -- # : 1 00:04:49.135 13:18:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.135 13:18:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.135 13:18:54 -- scripts/common.sh@364 -- # decimal 1 00:04:49.135 13:18:54 -- scripts/common.sh@352 -- # local d=1 00:04:49.135 13:18:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.135 13:18:54 -- scripts/common.sh@354 -- # echo 1 00:04:49.135 13:18:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.135 13:18:54 -- scripts/common.sh@365 -- # decimal 2 00:04:49.135 13:18:54 -- scripts/common.sh@352 -- # local d=2 00:04:49.135 13:18:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.135 13:18:54 -- scripts/common.sh@354 -- # echo 2 00:04:49.135 13:18:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.135 13:18:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.135 13:18:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.135 13:18:54 -- scripts/common.sh@367 -- # return 0 00:04:49.135 13:18:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.135 13:18:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.135 --rc genhtml_branch_coverage=1 00:04:49.135 --rc genhtml_function_coverage=1 00:04:49.135 --rc genhtml_legend=1 00:04:49.135 --rc geninfo_all_blocks=1 00:04:49.135 --rc geninfo_unexecuted_blocks=1 00:04:49.135 00:04:49.135 ' 00:04:49.135 13:18:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.135 --rc genhtml_branch_coverage=1 00:04:49.135 --rc genhtml_function_coverage=1 00:04:49.135 --rc genhtml_legend=1 00:04:49.135 --rc geninfo_all_blocks=1 00:04:49.135 --rc geninfo_unexecuted_blocks=1 00:04:49.135 00:04:49.135 ' 00:04:49.135 13:18:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.135 --rc genhtml_branch_coverage=1 00:04:49.135 --rc genhtml_function_coverage=1 00:04:49.135 --rc genhtml_legend=1 00:04:49.135 --rc geninfo_all_blocks=1 00:04:49.135 --rc geninfo_unexecuted_blocks=1 00:04:49.135 00:04:49.135 ' 00:04:49.135 13:18:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.135 --rc genhtml_branch_coverage=1 00:04:49.135 --rc genhtml_function_coverage=1 00:04:49.135 --rc genhtml_legend=1 00:04:49.135 --rc geninfo_all_blocks=1 00:04:49.135 --rc geninfo_unexecuted_blocks=1 00:04:49.135 00:04:49.135 ' 00:04:49.135 13:18:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.135 13:18:54 -- nvmf/common.sh@7 -- # uname -s 00:04:49.135 13:18:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.135 13:18:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.135 13:18:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.135 13:18:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.135 13:18:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.135 13:18:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.135 13:18:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.135 13:18:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.135 13:18:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.135 13:18:54 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.394 13:18:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:04:49.394 13:18:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:04:49.394 13:18:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.394 13:18:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.394 13:18:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:49.394 13:18:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.394 13:18:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.394 13:18:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.394 13:18:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.394 13:18:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.394 13:18:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.394 13:18:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.394 13:18:54 -- paths/export.sh@5 -- # export PATH 00:04:49.394 13:18:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.394 13:18:54 -- nvmf/common.sh@46 -- # : 0 00:04:49.394 13:18:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:49.394 13:18:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:49.394 13:18:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:49.394 13:18:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.394 13:18:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.394 13:18:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:49.394 13:18:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:49.394 13:18:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:49.394 13:18:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.394 13:18:54 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.394 13:18:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.394 13:18:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:49.394 13:18:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.394 13:18:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.394 13:18:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.394 13:18:54 -- spdk/autotest.sh@44 -- # modprobe nbd 
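Before any test runs, autotest.sh (the spdk/autotest.sh@33-40 trace just above) saves the host's core handler, creates the coredumps output directory, and points the kernel at SPDK's core-collector.sh instead. A minimal sketch of that pattern, assuming root and with $rootdir and $output_dir standing in for the repo and output paths (the trap-based restore is an assumption for illustration, not shown in this log):

old_core_pattern=$(< /proc/sys/kernel/core_pattern)    # here: the systemd-coredump pipe handler
mkdir -p "$output_dir/coredumps"
# a leading '|' makes the kernel pipe every core dump into the script; %P=PID, %s=signal, %t=epoch time
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT   # assumed cleanup: put the original handler back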
00:04:49.394 13:18:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.394 13:18:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:49.394 13:18:54 -- spdk/autotest.sh@48 -- # udevadm_pid=61813 00:04:49.394 13:18:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:49.394 13:18:54 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.394 13:18:54 -- spdk/autotest.sh@54 -- # echo 61816 00:04:49.394 13:18:54 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.394 13:18:54 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:49.394 13:18:54 -- spdk/autotest.sh@56 -- # echo 61818 00:04:49.394 13:18:54 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:49.394 13:18:54 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:49.394 13:18:54 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:49.394 13:18:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.394 13:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:49.394 13:18:54 -- spdk/autotest.sh@70 -- # create_test_list 00:04:49.394 13:18:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:49.394 13:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:49.394 13:18:54 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:49.394 13:18:54 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:49.394 13:18:54 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:49.394 13:18:54 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:49.394 13:18:54 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:49.394 13:18:54 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:49.394 13:18:54 -- common/autotest_common.sh@1450 -- # uname 00:04:49.394 13:18:54 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:49.394 13:18:54 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:49.394 13:18:54 -- common/autotest_common.sh@1470 -- # uname 00:04:49.394 13:18:54 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:49.394 13:18:54 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:49.394 13:18:54 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:49.394 lcov: LCOV version 1.15 00:04:49.394 13:18:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:57.507 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:57.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:57.507 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:57.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 
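The lcov call above captures a zero-count baseline (-i) before any test executes; the geninfo "no functions found" warnings here and just below only mean those ftl upgrade objects carry no instrumented functions. A rough sketch of the capture-and-merge flow this baseline feeds into (the post-run capture and merge happen later in autotest.sh; the cov_test/cov_total names are assumptions):

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
$LCOV -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"     # baseline, all counters zero
# ... the test suites run and drop .gcda files next to the instrumented .gcno files ...
$LCOV -q -c --no-external -t Autotest -d "$src" -o "$out/cov_test.info"        # post-run counters
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info" # merged input for the report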
00:04:57.507 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:57.507 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:15.589 13:19:19 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:15.589 13:19:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.589 13:19:19 -- common/autotest_common.sh@10 -- # set +x 00:05:15.589 13:19:19 -- spdk/autotest.sh@89 -- # rm -f 00:05:15.589 13:19:19 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.589 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:15.589 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:15.589 13:19:20 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:15.589 13:19:20 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:15.589 13:19:20 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:15.589 13:19:20 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:15.589 13:19:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:15.589 13:19:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:15.589 13:19:20 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:15.589 13:19:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:15.589 13:19:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:15.589 13:19:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:15.589 13:19:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:15.589 13:19:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:15.589 13:19:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:15.589 13:19:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:15.589 13:19:20 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:15.589 13:19:20 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:15.589 13:19:20 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:15.589 13:19:20 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:15.589 13:19:20 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # grep -v p 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:15.589 13:19:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:15.589 13:19:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:15.589 13:19:20 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:15.589 13:19:20 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.589 No valid GPT data, bailing 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # pt= 00:05:15.589 13:19:20 -- scripts/common.sh@394 -- # return 1 00:05:15.589 13:19:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.589 1+0 records in 00:05:15.589 1+0 records out 00:05:15.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488779 s, 215 MB/s 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:15.589 13:19:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:15.589 13:19:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:15.589 13:19:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:15.589 13:19:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:15.589 No valid GPT data, bailing 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # pt= 00:05:15.589 13:19:20 -- scripts/common.sh@394 -- # return 1 00:05:15.589 13:19:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:15.589 1+0 records in 00:05:15.589 1+0 records out 00:05:15.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451702 s, 232 MB/s 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:15.589 13:19:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:15.589 13:19:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:15.589 13:19:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:15.589 13:19:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:15.589 No valid GPT data, bailing 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # pt= 00:05:15.589 13:19:20 -- scripts/common.sh@394 -- # return 1 00:05:15.589 13:19:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:15.589 1+0 records in 00:05:15.589 1+0 records out 00:05:15.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459274 s, 228 MB/s 00:05:15.589 13:19:20 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:15.589 13:19:20 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:15.589 13:19:20 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:15.589 13:19:20 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:15.589 13:19:20 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:15.589 No valid GPT data, bailing 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:15.589 13:19:20 -- scripts/common.sh@393 -- # pt= 00:05:15.589 13:19:20 -- scripts/common.sh@394 -- # return 1 00:05:15.589 13:19:20 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:15.589 1+0 records in 00:05:15.589 1+0 records out 00:05:15.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0046161 s, 227 MB/s 00:05:15.589 13:19:20 -- spdk/autotest.sh@116 -- # sync 00:05:15.589 13:19:21 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.589 13:19:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.589 13:19:21 -- 
common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:17.489 13:19:22 -- spdk/autotest.sh@122 -- # uname -s 00:05:17.489 13:19:22 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:17.489 13:19:22 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:17.489 13:19:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.489 13:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.489 13:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:17.489 ************************************ 00:05:17.489 START TEST setup.sh 00:05:17.489 ************************************ 00:05:17.489 13:19:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:17.489 * Looking for test storage... 00:05:17.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.489 13:19:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.489 13:19:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.489 13:19:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.489 13:19:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.490 13:19:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.490 13:19:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.490 13:19:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.490 13:19:23 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.490 13:19:23 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.490 13:19:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.490 13:19:23 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.490 13:19:23 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.490 13:19:23 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.490 13:19:23 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.490 13:19:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.490 13:19:23 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.490 13:19:23 -- scripts/common.sh@344 -- # : 1 00:05:17.490 13:19:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.490 13:19:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.490 13:19:23 -- scripts/common.sh@364 -- # decimal 1 00:05:17.490 13:19:23 -- scripts/common.sh@352 -- # local d=1 00:05:17.490 13:19:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.490 13:19:23 -- scripts/common.sh@354 -- # echo 1 00:05:17.490 13:19:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.490 13:19:23 -- scripts/common.sh@365 -- # decimal 2 00:05:17.490 13:19:23 -- scripts/common.sh@352 -- # local d=2 00:05:17.490 13:19:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.490 13:19:23 -- scripts/common.sh@354 -- # echo 2 00:05:17.490 13:19:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.490 13:19:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.490 13:19:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.490 13:19:23 -- scripts/common.sh@367 -- # return 0 00:05:17.490 13:19:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.490 13:19:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.490 --rc genhtml_branch_coverage=1 00:05:17.490 --rc genhtml_function_coverage=1 00:05:17.490 --rc genhtml_legend=1 00:05:17.490 --rc geninfo_all_blocks=1 00:05:17.490 --rc geninfo_unexecuted_blocks=1 00:05:17.490 00:05:17.490 ' 00:05:17.490 13:19:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.490 --rc genhtml_branch_coverage=1 00:05:17.490 --rc genhtml_function_coverage=1 00:05:17.490 --rc genhtml_legend=1 00:05:17.490 --rc geninfo_all_blocks=1 00:05:17.490 --rc geninfo_unexecuted_blocks=1 00:05:17.490 00:05:17.490 ' 00:05:17.490 13:19:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.490 --rc genhtml_branch_coverage=1 00:05:17.490 --rc genhtml_function_coverage=1 00:05:17.490 --rc genhtml_legend=1 00:05:17.490 --rc geninfo_all_blocks=1 00:05:17.490 --rc geninfo_unexecuted_blocks=1 00:05:17.490 00:05:17.490 ' 00:05:17.490 13:19:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.490 --rc genhtml_branch_coverage=1 00:05:17.490 --rc genhtml_function_coverage=1 00:05:17.490 --rc genhtml_legend=1 00:05:17.490 --rc geninfo_all_blocks=1 00:05:17.490 --rc geninfo_unexecuted_blocks=1 00:05:17.490 00:05:17.490 ' 00:05:17.490 13:19:23 -- setup/test-setup.sh@10 -- # uname -s 00:05:17.490 13:19:23 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:17.490 13:19:23 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:17.490 13:19:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.490 13:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.490 13:19:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.490 ************************************ 00:05:17.490 START TEST acl 00:05:17.490 ************************************ 00:05:17.490 13:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:17.750 * Looking for test storage... 
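The scripts/common.sh trace that keeps reappearing in this log (lt 1.15 2 and friends) only decides whether the installed lcov predates 2.0, so the 1.x --rc lcov_branch_coverage/lcov_function_coverage flag spelling can be kept. A condensed sketch of that comparison, not the verbatim scripts/common.sh code (the per-field decimal validation is omitted):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local IFS=.-: op=$2 v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"      # split on '.', '-' and ':'
  read -ra ver2 <<< "$3"
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }   # missing fields compare as 0
    ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == '==' ]]
}
lt 1.15 2 && echo "lcov 1.15 is older than 2, keep the 1.x flag spelling"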
00:05:17.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.750 13:19:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:17.750 13:19:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:17.750 13:19:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:17.750 13:19:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:17.750 13:19:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:17.750 13:19:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:17.750 13:19:23 -- scripts/common.sh@335 -- # IFS=.-: 00:05:17.750 13:19:23 -- scripts/common.sh@335 -- # read -ra ver1 00:05:17.750 13:19:23 -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.750 13:19:23 -- scripts/common.sh@336 -- # read -ra ver2 00:05:17.750 13:19:23 -- scripts/common.sh@337 -- # local 'op=<' 00:05:17.750 13:19:23 -- scripts/common.sh@339 -- # ver1_l=2 00:05:17.750 13:19:23 -- scripts/common.sh@340 -- # ver2_l=1 00:05:17.750 13:19:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:17.750 13:19:23 -- scripts/common.sh@343 -- # case "$op" in 00:05:17.750 13:19:23 -- scripts/common.sh@344 -- # : 1 00:05:17.750 13:19:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:17.750 13:19:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.750 13:19:23 -- scripts/common.sh@364 -- # decimal 1 00:05:17.750 13:19:23 -- scripts/common.sh@352 -- # local d=1 00:05:17.750 13:19:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.750 13:19:23 -- scripts/common.sh@354 -- # echo 1 00:05:17.750 13:19:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:17.750 13:19:23 -- scripts/common.sh@365 -- # decimal 2 00:05:17.750 13:19:23 -- scripts/common.sh@352 -- # local d=2 00:05:17.750 13:19:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.750 13:19:23 -- scripts/common.sh@354 -- # echo 2 00:05:17.750 13:19:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:17.750 13:19:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:17.750 13:19:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:17.750 13:19:23 -- scripts/common.sh@367 -- # return 0 00:05:17.750 13:19:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.750 13:19:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:17.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.750 --rc genhtml_branch_coverage=1 00:05:17.750 --rc genhtml_function_coverage=1 00:05:17.750 --rc genhtml_legend=1 00:05:17.750 --rc geninfo_all_blocks=1 00:05:17.750 --rc geninfo_unexecuted_blocks=1 00:05:17.750 00:05:17.750 ' 00:05:17.750 13:19:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:17.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.750 --rc genhtml_branch_coverage=1 00:05:17.750 --rc genhtml_function_coverage=1 00:05:17.750 --rc genhtml_legend=1 00:05:17.750 --rc geninfo_all_blocks=1 00:05:17.750 --rc geninfo_unexecuted_blocks=1 00:05:17.750 00:05:17.750 ' 00:05:17.750 13:19:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:17.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.750 --rc genhtml_branch_coverage=1 00:05:17.750 --rc genhtml_function_coverage=1 00:05:17.750 --rc genhtml_legend=1 00:05:17.750 --rc geninfo_all_blocks=1 00:05:17.750 --rc geninfo_unexecuted_blocks=1 00:05:17.750 00:05:17.750 ' 00:05:17.750 13:19:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:17.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.750 --rc genhtml_branch_coverage=1 00:05:17.750 --rc genhtml_function_coverage=1 00:05:17.750 --rc genhtml_legend=1 00:05:17.750 --rc geninfo_all_blocks=1 00:05:17.750 --rc geninfo_unexecuted_blocks=1 00:05:17.750 00:05:17.750 ' 00:05:17.750 13:19:23 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:17.750 13:19:23 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:17.750 13:19:23 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:17.750 13:19:23 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:17.750 13:19:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:17.750 13:19:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:17.750 13:19:23 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:17.750 13:19:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:17.750 13:19:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:17.750 13:19:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:17.750 13:19:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:17.750 13:19:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:17.750 13:19:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:17.750 13:19:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:17.750 13:19:23 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:17.750 13:19:23 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:17.750 13:19:23 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:17.750 13:19:23 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:17.750 13:19:23 -- setup/acl.sh@12 -- # devs=() 00:05:17.750 13:19:23 -- setup/acl.sh@12 -- # declare -a devs 00:05:17.750 13:19:23 -- setup/acl.sh@13 -- # drivers=() 00:05:17.750 13:19:23 -- setup/acl.sh@13 -- # declare -A drivers 00:05:17.750 13:19:23 -- setup/acl.sh@51 -- # setup reset 00:05:17.750 13:19:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.750 13:19:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.686 13:19:24 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:18.686 13:19:24 -- setup/acl.sh@16 -- # local dev driver 00:05:18.686 13:19:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.686 13:19:24 -- setup/acl.sh@15 -- # setup output status 00:05:18.686 13:19:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.686 13:19:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:18.686 Hugepages 00:05:18.686 node hugesize free / total 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # continue 00:05:18.686 13:19:24 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:18.686 00:05:18.686 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # continue 00:05:18.686 13:19:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:18.686 13:19:24 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:18.686 13:19:24 -- setup/acl.sh@20 -- # continue 00:05:18.686 13:19:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.686 13:19:24 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:18.686 13:19:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.686 13:19:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:18.686 13:19:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.686 13:19:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.686 13:19:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.945 13:19:24 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:18.945 13:19:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:18.945 13:19:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:18.945 13:19:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:18.945 13:19:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:18.945 13:19:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:18.945 13:19:24 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:18.945 13:19:24 -- setup/acl.sh@54 -- # run_test denied denied 00:05:18.945 13:19:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.945 13:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.945 13:19:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.945 ************************************ 00:05:18.945 START TEST denied 00:05:18.945 ************************************ 00:05:18.945 13:19:24 -- common/autotest_common.sh@1114 -- # denied 00:05:18.945 13:19:24 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:18.945 13:19:24 -- setup/acl.sh@38 -- # setup output config 00:05:18.945 13:19:24 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:18.945 13:19:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.945 13:19:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:19.881 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:19.881 13:19:25 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:19.881 13:19:25 -- setup/acl.sh@28 -- # local dev driver 00:05:19.881 13:19:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:19.881 13:19:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:19.881 13:19:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:19.881 13:19:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:19.881 13:19:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:19.881 13:19:25 -- setup/acl.sh@41 -- # setup reset 00:05:19.881 13:19:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:19.881 13:19:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.449 00:05:20.449 real 0m1.434s 00:05:20.449 user 0m0.576s 00:05:20.449 sys 0m0.814s 00:05:20.449 13:19:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.449 ************************************ 00:05:20.449 13:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:20.449 END TEST denied 00:05:20.449 
************************************ 00:05:20.449 13:19:25 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:20.449 13:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.449 13:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.449 13:19:25 -- common/autotest_common.sh@10 -- # set +x 00:05:20.449 ************************************ 00:05:20.449 START TEST allowed 00:05:20.449 ************************************ 00:05:20.449 13:19:25 -- common/autotest_common.sh@1114 -- # allowed 00:05:20.449 13:19:25 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:20.449 13:19:25 -- setup/acl.sh@45 -- # setup output config 00:05:20.449 13:19:25 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:20.449 13:19:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.449 13:19:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.017 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.017 13:19:26 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:21.017 13:19:26 -- setup/acl.sh@28 -- # local dev driver 00:05:21.017 13:19:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:21.017 13:19:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:21.017 13:19:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:21.017 13:19:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:21.017 13:19:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:21.017 13:19:26 -- setup/acl.sh@48 -- # setup reset 00:05:21.017 13:19:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.017 13:19:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.952 ************************************ 00:05:21.952 END TEST allowed 00:05:21.952 ************************************ 00:05:21.952 00:05:21.952 real 0m1.486s 00:05:21.952 user 0m0.672s 00:05:21.952 sys 0m0.817s 00:05:21.952 13:19:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.952 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:21.952 ************************************ 00:05:21.952 END TEST acl 00:05:21.952 ************************************ 00:05:21.952 00:05:21.952 real 0m4.291s 00:05:21.952 user 0m1.882s 00:05:21.952 sys 0m2.391s 00:05:21.952 13:19:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.952 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:21.952 13:19:27 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:21.952 13:19:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.952 13:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.952 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:21.952 ************************************ 00:05:21.952 START TEST hugepages 00:05:21.952 ************************************ 00:05:21.952 13:19:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:21.952 * Looking for test storage... 
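The hugepages suite starting here leans on setup/common.sh's get_meminfo, whose field-by-field trace fills the next stretch of the log. Reduced to its effect it is roughly the following sketch; per-node lookups through /sys/devices/system/node/nodeN/meminfo are elided:

get_meminfo() {                      # print one /proc/meminfo value (kB, or a page count for HugePages_*)
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  return 1
}
get_meminfo Hugepagesize             # -> 2048 on this runner, which becomes default_hugepages
get_meminfo HugePages_Total          # -> 2048 at this point; clear_hp zeroes it before each sub-test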
00:05:21.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:21.953 13:19:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.953 13:19:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.953 13:19:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.953 13:19:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.953 13:19:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.953 13:19:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.953 13:19:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.953 13:19:27 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.953 13:19:27 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.953 13:19:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.953 13:19:27 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.953 13:19:27 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.953 13:19:27 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.953 13:19:27 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.953 13:19:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.953 13:19:27 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.953 13:19:27 -- scripts/common.sh@344 -- # : 1 00:05:21.953 13:19:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.953 13:19:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.953 13:19:27 -- scripts/common.sh@364 -- # decimal 1 00:05:21.953 13:19:27 -- scripts/common.sh@352 -- # local d=1 00:05:21.953 13:19:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.953 13:19:27 -- scripts/common.sh@354 -- # echo 1 00:05:21.953 13:19:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.213 13:19:27 -- scripts/common.sh@365 -- # decimal 2 00:05:22.213 13:19:27 -- scripts/common.sh@352 -- # local d=2 00:05:22.213 13:19:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.213 13:19:27 -- scripts/common.sh@354 -- # echo 2 00:05:22.213 13:19:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.213 13:19:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.213 13:19:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.213 13:19:27 -- scripts/common.sh@367 -- # return 0 00:05:22.213 13:19:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.213 13:19:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.213 --rc genhtml_branch_coverage=1 00:05:22.213 --rc genhtml_function_coverage=1 00:05:22.213 --rc genhtml_legend=1 00:05:22.213 --rc geninfo_all_blocks=1 00:05:22.213 --rc geninfo_unexecuted_blocks=1 00:05:22.213 00:05:22.213 ' 00:05:22.213 13:19:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.213 --rc genhtml_branch_coverage=1 00:05:22.213 --rc genhtml_function_coverage=1 00:05:22.213 --rc genhtml_legend=1 00:05:22.213 --rc geninfo_all_blocks=1 00:05:22.213 --rc geninfo_unexecuted_blocks=1 00:05:22.213 00:05:22.213 ' 00:05:22.213 13:19:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.213 --rc genhtml_branch_coverage=1 00:05:22.213 --rc genhtml_function_coverage=1 00:05:22.213 --rc genhtml_legend=1 00:05:22.213 --rc geninfo_all_blocks=1 00:05:22.213 --rc geninfo_unexecuted_blocks=1 00:05:22.213 00:05:22.213 ' 00:05:22.213 13:19:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.213 --rc genhtml_branch_coverage=1 00:05:22.213 --rc genhtml_function_coverage=1 00:05:22.213 --rc genhtml_legend=1 00:05:22.213 --rc geninfo_all_blocks=1 00:05:22.213 --rc geninfo_unexecuted_blocks=1 00:05:22.213 00:05:22.213 ' 00:05:22.213 13:19:27 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:22.213 13:19:27 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:22.213 13:19:27 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:22.213 13:19:27 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:22.213 13:19:27 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:22.213 13:19:27 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:22.213 13:19:27 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:22.213 13:19:27 -- setup/common.sh@18 -- # local node= 00:05:22.213 13:19:27 -- setup/common.sh@19 -- # local var val 00:05:22.213 13:19:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.213 13:19:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.213 13:19:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.213 13:19:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.213 13:19:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.213 13:19:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 4399852 kB' 'MemAvailable: 7330268 kB' 'Buffers: 3704 kB' 'Cached: 3130024 kB' 'SwapCached: 0 kB' 'Active: 496316 kB' 'Inactive: 2754032 kB' 'Active(anon): 127132 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118288 kB' 'Mapped: 50716 kB' 'Shmem: 10512 kB' 'KReclaimable: 88704 kB' 'Slab: 192036 kB' 'SReclaimable: 88704 kB' 'SUnreclaim: 103332 kB' 'KernelStack: 6784 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 321024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- 
setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.213 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.213 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # continue 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.214 13:19:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.214 13:19:27 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:22.214 13:19:27 -- setup/common.sh@33 -- # echo 2048 00:05:22.214 13:19:27 -- setup/common.sh@33 -- # return 0 00:05:22.214 13:19:27 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:22.214 13:19:27 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:22.214 13:19:27 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:22.214 13:19:27 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:22.214 13:19:27 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:22.214 13:19:27 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:22.214 13:19:27 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:22.214 13:19:27 -- setup/hugepages.sh@207 -- # get_nodes 00:05:22.214 13:19:27 -- setup/hugepages.sh@27 -- # local node 00:05:22.214 13:19:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.214 13:19:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:22.214 13:19:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.214 13:19:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.214 13:19:27 -- setup/hugepages.sh@208 -- # clear_hp 00:05:22.214 13:19:27 -- setup/hugepages.sh@37 -- # local node hp 00:05:22.214 13:19:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:22.214 13:19:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.214 13:19:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.214 13:19:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.214 13:19:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.214 13:19:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:22.214 13:19:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:22.214 13:19:27 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:22.214 13:19:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.214 13:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.214 13:19:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.214 ************************************ 00:05:22.214 START TEST default_setup 00:05:22.214 ************************************ 00:05:22.214 13:19:27 -- common/autotest_common.sh@1114 -- # default_setup 00:05:22.214 13:19:27 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:22.214 13:19:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:22.214 13:19:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:22.214 13:19:27 -- setup/hugepages.sh@51 -- # shift 00:05:22.214 13:19:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:22.214 13:19:27 -- setup/hugepages.sh@52 -- # local node_ids 00:05:22.214 13:19:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.214 13:19:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:22.214 13:19:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:22.214 13:19:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:22.214 13:19:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.214 13:19:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:22.214 13:19:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.214 13:19:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.214 13:19:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.214 13:19:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:22.214 13:19:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:22.215 13:19:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:22.215 13:19:27 -- setup/hugepages.sh@73 -- # return 0 00:05:22.215 13:19:27 -- setup/hugepages.sh@137 -- # setup output 00:05:22.215 13:19:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.215 13:19:27 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.781 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.044 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.044 13:19:28 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:23.044 13:19:28 -- setup/hugepages.sh@89 -- # local node 00:05:23.044 13:19:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.044 13:19:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.044 13:19:28 -- setup/hugepages.sh@92 -- # local surp 00:05:23.044 13:19:28 -- setup/hugepages.sh@93 -- # local resv 00:05:23.044 13:19:28 -- setup/hugepages.sh@94 -- # local anon 00:05:23.044 13:19:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.044 13:19:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.044 13:19:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.044 13:19:28 -- setup/common.sh@18 -- # local node= 00:05:23.044 13:19:28 -- setup/common.sh@19 -- # local var val 00:05:23.044 13:19:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.044 13:19:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.044 13:19:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.044 13:19:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.044 13:19:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.044 13:19:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6502364 kB' 'MemAvailable: 9432596 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497952 kB' 'Inactive: 2754032 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 51064 kB' 'Shmem: 10492 kB' 'KReclaimable: 88332 kB' 'Slab: 191772 kB' 'SReclaimable: 88332 kB' 'SUnreclaim: 103440 kB' 'KernelStack: 6816 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.044 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.044 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- 
setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.045 13:19:28 -- setup/common.sh@33 -- # echo 0 00:05:23.045 13:19:28 -- setup/common.sh@33 -- # return 0 00:05:23.045 13:19:28 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.045 13:19:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.045 13:19:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.045 13:19:28 -- setup/common.sh@18 -- # local node= 00:05:23.045 13:19:28 -- setup/common.sh@19 -- # local var val 00:05:23.045 13:19:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.045 13:19:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.045 13:19:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.045 13:19:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.045 13:19:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.045 13:19:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6502364 kB' 'MemAvailable: 9432596 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497900 kB' 'Inactive: 2754032 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 51064 kB' 'Shmem: 10492 kB' 'KReclaimable: 88332 kB' 'Slab: 191768 kB' 'SReclaimable: 88332 kB' 'SUnreclaim: 103436 kB' 'KernelStack: 6800 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.045 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.045 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.045 13:19:28 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- 
setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 
00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.046 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.046 13:19:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.047 13:19:28 -- setup/common.sh@33 -- # echo 0 00:05:23.047 13:19:28 -- setup/common.sh@33 -- # return 0 00:05:23.047 13:19:28 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.047 13:19:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.047 13:19:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.047 13:19:28 -- setup/common.sh@18 -- # local node= 00:05:23.047 13:19:28 -- setup/common.sh@19 -- # local var val 00:05:23.047 13:19:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.047 13:19:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.047 13:19:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.047 13:19:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.047 13:19:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.047 13:19:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.047 
13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6502616 kB' 'MemAvailable: 9432848 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497540 kB' 'Inactive: 2754032 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119572 kB' 'Mapped: 50856 kB' 'Shmem: 10492 kB' 'KReclaimable: 88332 kB' 'Slab: 191748 kB' 'SReclaimable: 88332 kB' 'SUnreclaim: 103416 kB' 'KernelStack: 6720 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 
13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.047 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.047 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 
13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.048 13:19:28 -- setup/common.sh@33 -- # echo 0 00:05:23.048 13:19:28 -- setup/common.sh@33 -- # return 0 00:05:23.048 13:19:28 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.048 nr_hugepages=1024 00:05:23.048 13:19:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:23.048 resv_hugepages=0 00:05:23.048 13:19:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.048 surplus_hugepages=0 00:05:23.048 13:19:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.048 anon_hugepages=0 00:05:23.048 13:19:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.048 13:19:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.048 13:19:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:23.048 13:19:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.048 13:19:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.048 13:19:28 -- setup/common.sh@18 -- # local node= 00:05:23.048 13:19:28 -- setup/common.sh@19 -- # local var val 00:05:23.048 13:19:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.048 13:19:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.048 13:19:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.048 13:19:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.048 13:19:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.048 13:19:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6502616 kB' 'MemAvailable: 9432848 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497688 kB' 'Inactive: 2754032 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119688 kB' 'Mapped: 50856 kB' 'Shmem: 10492 kB' 'KReclaimable: 88332 kB' 'Slab: 191748 kB' 
'SReclaimable: 88332 kB' 'SUnreclaim: 103416 kB' 'KernelStack: 6772 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.048 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.048 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 
13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- 
setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.049 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.049 13:19:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.050 13:19:28 -- 
setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.050 13:19:28 -- setup/common.sh@33 -- # echo 1024 00:05:23.050 13:19:28 -- setup/common.sh@33 -- # return 0 00:05:23.050 13:19:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:23.050 13:19:28 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.050 13:19:28 -- setup/hugepages.sh@27 -- # local node 00:05:23.050 13:19:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.050 13:19:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:23.050 13:19:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.050 13:19:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.050 13:19:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.050 13:19:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.050 13:19:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.050 13:19:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.050 13:19:28 -- setup/common.sh@18 -- # local node=0 00:05:23.050 13:19:28 -- setup/common.sh@19 -- # local var val 00:05:23.050 13:19:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.050 13:19:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.050 13:19:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.050 13:19:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.050 13:19:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.050 13:19:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6502368 kB' 'MemUsed: 5736736 kB' 'SwapCached: 0 kB' 'Active: 497764 kB' 'Inactive: 2754032 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 3133720 kB' 'Mapped: 50740 kB' 'AnonPages: 119716 kB' 'Shmem: 10488 kB' 'KernelStack: 6752 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191716 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 
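For readers following the xtrace above: the setup/common.sh helper being traced here is get_meminfo, which walks a meminfo file key by key until it hits the requested field, switching to the per-node file when a node number is given. A condensed sketch reconstructed from this trace (not the verbatim upstream script; the extglob toggle and the trailing return 1 are additions for self-containment):

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # Per-node queries read the node's own meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node 0 " prefix

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # not the key we were asked for
            echo "$val"                       # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

    # e.g. get_meminfo HugePages_Surp 0  ->  surplus pages on NUMA node 0 (0 in this run)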
00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.050 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.050 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # continue 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.051 13:19:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.051 13:19:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.051 13:19:28 -- setup/common.sh@33 -- # echo 0 00:05:23.051 13:19:28 -- setup/common.sh@33 -- # return 0 00:05:23.051 13:19:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.051 13:19:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.051 13:19:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.051 13:19:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.051 node0=1024 expecting 1024 00:05:23.051 13:19:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:23.051 13:19:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:23.051 00:05:23.051 real 0m0.969s 00:05:23.051 user 0m0.463s 00:05:23.051 sys 0m0.461s 00:05:23.051 13:19:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.051 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:05:23.051 ************************************ 00:05:23.051 END TEST default_setup 00:05:23.051 ************************************ 00:05:23.310 13:19:28 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:23.310 13:19:28 
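The default_setup test that ends here boils down to two checks: the global HugePages_Total must equal the requested pages plus surplus plus reserved, and node 0's tally must match the 1024 pages that were asked for. A condensed, hedged restatement using the get_meminfo sketch above (names follow the trace; the 0 value for HugePages_Rsvd is inferred from the check passing, not shown directly in this excerpt):

    declare -A nodes_test
    nodes_test[0]=1024                        # per-node tally built earlier (request + resv + surp)
    nr_hugepages=1024                         # what default_setup asked for
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run (inferred)
    total=$(get_meminfo HugePages_Total)      # 1024
    (( total == nr_hugepages + surp + resv ))               # the hugepages.sh@110 check
    echo "node0=${nodes_test[0]} expecting $nr_hugepages"   # node0=1024 expecting 1024
    [[ ${nodes_test[0]} == "$nr_hugepages" ]]               # the hugepages.sh@130 check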
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.310 13:19:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.310 13:19:28 -- common/autotest_common.sh@10 -- # set +x 00:05:23.310 ************************************ 00:05:23.310 START TEST per_node_1G_alloc 00:05:23.310 ************************************ 00:05:23.310 13:19:28 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:23.310 13:19:28 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:23.310 13:19:28 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:23.310 13:19:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:23.310 13:19:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:23.310 13:19:28 -- setup/hugepages.sh@51 -- # shift 00:05:23.310 13:19:28 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:23.310 13:19:28 -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.310 13:19:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.310 13:19:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:23.310 13:19:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:23.310 13:19:28 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:23.310 13:19:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.310 13:19:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.310 13:19:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.310 13:19:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.310 13:19:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.310 13:19:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:23.310 13:19:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.310 13:19:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:23.310 13:19:28 -- setup/hugepages.sh@73 -- # return 0 00:05:23.310 13:19:28 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:23.310 13:19:28 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:23.310 13:19:28 -- setup/hugepages.sh@146 -- # setup output 00:05:23.310 13:19:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.310 13:19:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.572 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.572 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.572 13:19:29 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:23.572 13:19:29 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:23.572 13:19:29 -- setup/hugepages.sh@89 -- # local node 00:05:23.572 13:19:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.572 13:19:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.572 13:19:29 -- setup/hugepages.sh@92 -- # local surp 00:05:23.572 13:19:29 -- setup/hugepages.sh@93 -- # local resv 00:05:23.572 13:19:29 -- setup/hugepages.sh@94 -- # local anon 00:05:23.572 13:19:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.572 13:19:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.572 13:19:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.572 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:23.572 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:23.572 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.572 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.572 13:19:29 -- 
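The per_node_1G_alloc test starting above asks for 1 GiB of hugepages on a single node; with the 2048 kB Hugepagesize reported in the meminfo dumps, that is the 512 pages seen in the trace. A minimal sketch of the sizing and the hand-off to setup.sh (the division is illustrative, the real hugepages.sh splits this across several helpers; NRHUGE and HUGENODE are the environment variables the trace shows being set right before setup.sh is invoked):

    size_kb=1048576                               # per_node_1G_alloc requests 1 GiB on one node
    hugepage_kb=2048                              # Hugepagesize reported in the meminfo dumps
    nr_hugepages=$(( size_kb / hugepage_kb ))     # 1048576 / 2048 = 512 pages
    # hugepages.sh then hands the result to setup.sh, pinned to node 0:
    NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh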
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.572 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.572 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.572 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.572 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.572 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.572 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7558352 kB' 'MemAvailable: 10488588 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497956 kB' 'Inactive: 2754052 kB' 'Active(anon): 128772 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50816 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191732 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103428 kB' 'KernelStack: 6744 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 
-- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 
13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.573 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.573 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.573 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:23.573 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:23.573 13:19:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.573 13:19:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.573 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.573 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:23.573 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:23.573 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.573 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.574 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.574 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.574 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.574 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7558492 kB' 'MemAvailable: 10488728 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497844 kB' 'Inactive: 2754052 kB' 
'Active(anon): 128660 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119788 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191736 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103432 kB' 'KernelStack: 6720 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # 
continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.574 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.574 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.575 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:23.575 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:23.575 13:19:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.575 13:19:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.575 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.575 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:23.575 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:23.575 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.575 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.575 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.575 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.575 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.575 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7558492 kB' 'MemAvailable: 10488728 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 2754052 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119724 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191740 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103436 kB' 'KernelStack: 6768 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.575 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.575 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.576 13:19:29 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.576 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:23.576 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:23.576 13:19:29 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.576 nr_hugepages=512 00:05:23.576 13:19:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:23.576 resv_hugepages=0 00:05:23.576 13:19:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.576 surplus_hugepages=0 00:05:23.576 13:19:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.576 anon_hugepages=0 00:05:23.576 13:19:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.576 13:19:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:23.576 13:19:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:23.576 13:19:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.576 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.576 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:23.576 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:23.576 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.576 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.576 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.576 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.576 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.576 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.576 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.576 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7558240 kB' 'MemAvailable: 10488476 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497720 kB' 'Inactive: 2754052 kB' 'Active(anon): 128536 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119692 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191736 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103432 kB' 'KernelStack: 6752 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 
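(Editor's aside.) The long scans above are setup/common.sh's get_meminfo walking every "Key: value" row of a meminfo file until it finds the requested key, then echoing that value (HugePages_Rsvd -> 0 just above, HugePages_Total -> 512 below). A minimal standalone sketch of that lookup loop, assuming the same file layout the trace shows; the function body is illustrative, not the verbatim SPDK source:

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node queries read the node's own meminfo file instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            # Node files prefix each row with "Node N "; drop it so the same
            # "Key: value kB" parsing works for both files.
            [[ $line =~ ^Node\ [0-9]+\ (.*) ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. HugePages_Rsvd -> 0, HugePages_Total -> 512
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_meminfo_sketch HugePages_Rsvd     -> 0   (global /proc/meminfo)
    # get_meminfo_sketch HugePages_Total 0  -> 512 (node0's meminfo)
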
13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 
13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.577 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.577 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.578 13:19:29 -- setup/common.sh@33 -- # echo 512 00:05:23.578 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:23.578 13:19:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:23.578 13:19:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:23.578 13:19:29 -- setup/hugepages.sh@27 -- # local node 00:05:23.578 13:19:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:23.578 13:19:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:23.578 13:19:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:23.578 13:19:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:23.578 13:19:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:23.578 13:19:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:23.578 13:19:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:23.578 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.578 13:19:29 -- setup/common.sh@18 -- # local node=0 00:05:23.578 13:19:29 -- setup/common.sh@19 -- # local 
var val 00:05:23.578 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.578 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.578 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:23.578 13:19:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:23.578 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.578 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7558240 kB' 'MemUsed: 4680864 kB' 'SwapCached: 0 kB' 'Active: 497632 kB' 'Inactive: 2754052 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 3133724 kB' 'Mapped: 50740 kB' 'AnonPages: 119552 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191732 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- 
setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.578 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.578 13:19:29 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.578 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # continue 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.579 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.579 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.579 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:23.579 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:23.579 13:19:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.579 13:19:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.579 13:19:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.579 13:19:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.579 node0=512 expecting 512 00:05:23.579 13:19:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:23.579 13:19:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:23.579 00:05:23.579 real 0m0.514s 00:05:23.579 user 0m0.259s 00:05:23.579 sys 0m0.289s 00:05:23.579 13:19:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.579 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:23.579 ************************************ 00:05:23.579 END TEST per_node_1G_alloc 00:05:23.579 ************************************ 00:05:23.838 13:19:29 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:23.838 13:19:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.838 13:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.838 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:23.838 ************************************ 00:05:23.838 START TEST even_2G_alloc 00:05:23.838 ************************************ 00:05:23.838 13:19:29 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:23.838 13:19:29 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:23.838 13:19:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:23.838 13:19:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:23.838 13:19:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.838 13:19:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:23.838 13:19:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:23.838 13:19:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.838 13:19:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.838 13:19:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:23.838 13:19:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.838 13:19:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.838 13:19:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.838 13:19:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.838 13:19:29 -- 
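(Editor's aside.) The "node0=512 expecting 512" line above is the end of per_node_1G_alloc's verification: for each NUMA node it compares the HugePages_Total reported in that node's meminfo with the count the test requested (512 pages x 2048 kB = 1 GiB on the single node of this VM, matching the Hugetlb: 1048576 kB figure in the trace). A hedged sketch of that check, with illustrative variable names rather than the script's own:

    declare -A expected=( [0]=512 )   # this run: one node, 512 pages expected on node0
    status=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Row in the per-node file looks like: "Node 0 HugePages_Total:   512"
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected[$node]}"
        [[ $total == "${expected[$node]}" ]] || status=1
    done
    exit "$status"   # non-zero if any node came up short
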
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:23.838 13:19:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.838 13:19:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:23.838 13:19:29 -- setup/hugepages.sh@83 -- # : 0 00:05:23.838 13:19:29 -- setup/hugepages.sh@84 -- # : 0 00:05:23.838 13:19:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.838 13:19:29 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:23.838 13:19:29 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:23.838 13:19:29 -- setup/hugepages.sh@153 -- # setup output 00:05:23.838 13:19:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.838 13:19:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.102 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.102 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.102 13:19:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:24.102 13:19:29 -- setup/hugepages.sh@89 -- # local node 00:05:24.102 13:19:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.102 13:19:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.102 13:19:29 -- setup/hugepages.sh@92 -- # local surp 00:05:24.102 13:19:29 -- setup/hugepages.sh@93 -- # local resv 00:05:24.102 13:19:29 -- setup/hugepages.sh@94 -- # local anon 00:05:24.102 13:19:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.102 13:19:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.102 13:19:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.102 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:24.102 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:24.102 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.102 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.102 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.102 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.102 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.102 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6511556 kB' 'MemAvailable: 9441792 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498044 kB' 'Inactive: 2754052 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 50852 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191784 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103480 kB' 'KernelStack: 6712 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
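(Editor's aside.) even_2G_alloc, which starts above, appears to size its request the same way: the traced size of 2097152 (read as kB, i.e. 2 GiB) divided by the 2048 kB default hugepage size gives the nr_hugepages=1024 seen in the trace, and HUGE_EVEN_ALLOC=yes / NRHUGE=1024 are exported before scripts/setup.sh runs so the pages are reserved evenly across NUMA nodes. A sketch of that arithmetic and invocation, assuming the kB interpretation and the checkout path shown in this log:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}            # checkout used in this run
    size_kb=2097152                                               # 2 GiB expressed in kB
    hugepage_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)  # 2048 on this VM
    nr_hugepages=$(( size_kb / hugepage_kb ))                     # 2097152 / 2048 = 1024
    echo "nr_hugepages=$nr_hugepages"
    # Reserve the pages, spread evenly across nodes (needs root, as in the CI job).
    HUGE_EVEN_ALLOC=yes NRHUGE=$nr_hugepages "$SPDK_DIR/scripts/setup.sh"
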
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 
13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.102 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.102 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # 
continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.103 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:24.103 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:24.103 13:19:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.103 13:19:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.103 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.103 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:24.103 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:24.103 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.103 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.103 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.103 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.103 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.103 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6511852 kB' 'MemAvailable: 9442088 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497788 kB' 'Inactive: 2754052 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119688 kB' 'Mapped: 50848 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191812 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103508 kB' 'KernelStack: 6712 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 
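(Editor's aside.) verify_nr_hugepages also records anon_hugepages=0 above: it only counts AnonHugePages when transparent hugepages are not globally disabled, which is what the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test in the trace checks against the THP sysfs knob. A small sketch of that step; the sysfs path is the standard location, and the variable names here are illustrative:

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    anon=0
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        # e.g. "always [madvise] never" -> THP available, so read the counter (kB).
        anon=$(awk '/AnonHugePages/ {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon"
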
00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.103 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.103 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # 
continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.104 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.104 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:24.104 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:24.104 13:19:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.104 13:19:29 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.104 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.104 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:24.104 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:24.104 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.104 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.104 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.104 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.104 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.104 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.104 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6511852 kB' 'MemAvailable: 9442088 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497748 kB' 'Inactive: 2754052 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 50848 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191812 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103508 kB' 'KernelStack: 6696 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- 
setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.105 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.105 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.106 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:24.106 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:24.106 13:19:29 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.106 nr_hugepages=1024 00:05:24.106 13:19:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.106 resv_hugepages=0 00:05:24.106 13:19:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.106 surplus_hugepages=0 00:05:24.106 13:19:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.106 anon_hugepages=0 00:05:24.106 13:19:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.106 13:19:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.106 13:19:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.106 13:19:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.106 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.106 13:19:29 -- setup/common.sh@18 -- # local node= 00:05:24.106 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:24.106 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.106 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.106 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.106 13:19:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.106 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.106 13:19:29 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6511600 kB' 'MemAvailable: 9441836 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497736 kB' 'Inactive: 2754052 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191820 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103516 kB' 'KernelStack: 6704 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 
13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.106 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.106 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.107 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.107 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.367 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.367 13:19:29 -- setup/common.sh@32 -- # continue 
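Once the three lookups come back (anon=0, surp=0, resv=0), the script echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and asserts — at hugepages.sh@107 above and again at @110 below, after the HugePages_Total lookup now in progress — that 1024 == nr_hugepages + surp + resv. One way to perform an equivalent accounting check directly against /proc/meminfo; the variable names are illustrative:

    # Hugepage accounting check mirroring the traced assertion (illustrative names).
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == expected + surp + rsvd )) || echo "hugepage accounting mismatch" >&2

In the run above this holds trivially: 1024 == 1024 + 0 + 0.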
00:05:24.367 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.368 13:19:29 -- setup/common.sh@33 -- # echo 1024 00:05:24.368 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:24.368 13:19:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.368 13:19:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.368 13:19:29 -- setup/hugepages.sh@27 -- # local node 00:05:24.368 13:19:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.368 13:19:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.368 13:19:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.368 13:19:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.368 13:19:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.368 13:19:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.368 13:19:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.368 13:19:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.368 13:19:29 -- setup/common.sh@18 -- # local node=0 00:05:24.368 13:19:29 -- setup/common.sh@19 -- # local var val 00:05:24.368 13:19:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.368 13:19:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.368 13:19:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.368 13:19:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.368 13:19:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.368 13:19:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6511852 kB' 'MemUsed: 5727252 kB' 'SwapCached: 0 kB' 'Active: 497736 kB' 'Inactive: 2754052 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 3133724 kB' 'Mapped: 50740 kB' 'AnonPages: 119632 kB' 'Shmem: 10488 kB' 'KernelStack: 6772 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191820 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 
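After the system-wide counts check out, the test walks the NUMA nodes (get_nodes set no_nodes=1 above) and repeats the HugePages_Surp lookup per node: with node=0 the helper switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, which is why the mapfile step strips the "Node +([0-9]) " prefix before splitting on ': '. A small sketch of that source selection, with an illustrative function name:

    # Resolve which meminfo file to scan for an optional NUMA node (illustrative sketch).
    meminfo_file() {
        local node=${1:-}
        local f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$f"
    }

On this single-node runner meminfo_file 0 would resolve to /sys/devices/system/node/node0/meminfo, and the per-node scan continuing below again ends with HugePages_Surp == 0.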
00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.368 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.368 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- 
setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # continue 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.369 13:19:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.369 13:19:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.369 13:19:29 -- setup/common.sh@33 -- # echo 0 00:05:24.369 13:19:29 -- setup/common.sh@33 -- # return 0 00:05:24.369 13:19:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.369 13:19:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.369 13:19:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.369 
node0=1024 expecting 1024 00:05:24.369 13:19:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.369 13:19:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.369 00:05:24.369 real 0m0.516s 00:05:24.369 user 0m0.281s 00:05:24.369 sys 0m0.270s 00:05:24.369 13:19:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.369 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.369 ************************************ 00:05:24.369 END TEST even_2G_alloc 00:05:24.369 ************************************ 00:05:24.369 13:19:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:24.369 13:19:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.369 13:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.369 13:19:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.369 ************************************ 00:05:24.369 START TEST odd_alloc 00:05:24.369 ************************************ 00:05:24.369 13:19:29 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:24.369 13:19:29 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:24.369 13:19:29 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:24.369 13:19:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:24.369 13:19:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.369 13:19:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.369 13:19:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.369 13:19:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:24.369 13:19:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.369 13:19:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.369 13:19:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.369 13:19:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:24.369 13:19:29 -- setup/hugepages.sh@83 -- # : 0 00:05:24.369 13:19:29 -- setup/hugepages.sh@84 -- # : 0 00:05:24.369 13:19:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.369 13:19:29 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:24.369 13:19:29 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:24.369 13:19:29 -- setup/hugepages.sh@160 -- # setup output 00:05:24.369 13:19:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.369 13:19:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.631 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.631 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.631 13:19:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:24.631 13:19:30 -- setup/hugepages.sh@89 -- # local node 00:05:24.631 13:19:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.631 13:19:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.631 13:19:30 -- setup/hugepages.sh@92 -- # local surp 00:05:24.631 13:19:30 -- setup/hugepages.sh@93 -- # local resv 00:05:24.631 13:19:30 -- setup/hugepages.sh@94 -- # local anon 00:05:24.631 13:19:30 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.631 13:19:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.631 13:19:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.631 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:24.631 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:24.631 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.631 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.631 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.631 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.631 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.631 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512624 kB' 'MemAvailable: 9442860 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497936 kB' 'Inactive: 2754052 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119920 kB' 'Mapped: 50908 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191832 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103528 kB' 'KernelStack: 6744 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
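The odd_alloc setup traced above asks get_test_nr_hugepages for 2098176 kB and ends up requesting nr_hugepages=1025. A minimal sketch of that size-to-count conversion, assuming the 2048 kB Hugepagesize reported later in this trace; the rounding helper below is illustrative only, not the actual setup/hugepages.sh code:

#!/usr/bin/env bash
# Illustrative only: turn a test size in kB into a hugepage count, rounding up
# so that HUGEMEM=2049 (2049 MiB = 2098176 kB) comes out as an odd 1025 pages.
size_kb=2098176      # from the trace: get_test_nr_hugepages 2098176
hugepage_kb=2048     # from the trace: Hugepagesize: 2048 kB
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1025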
00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.631 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.631 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # 
continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.632 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:24.632 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:24.632 13:19:30 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.632 13:19:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.632 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.632 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:24.632 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:24.632 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.632 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.632 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.632 13:19:30 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.632 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.632 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512372 kB' 'MemAvailable: 9442608 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497860 kB' 'Inactive: 2754052 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 50908 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191844 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103540 kB' 'KernelStack: 6744 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.632 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.632 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 
00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.633 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.633 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.634 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:24.634 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:24.634 13:19:30 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.634 13:19:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.634 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.634 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:24.634 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:24.634 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.634 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.634 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.634 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.634 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.634 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512120 kB' 'MemAvailable: 9442356 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497792 kB' 'Inactive: 2754052 kB' 'Active(anon): 128608 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119696 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191856 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103552 kB' 'KernelStack: 6752 kB' 
'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.634 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.634 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.895 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
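Most of the trace volume here is get_meminfo walking every key of /proc/meminfo with an IFS=': ' read loop and continuing until it reaches the requested field. A stripped-down sketch of that lookup pattern for the system-wide meminfo only; it is a simplification of what the traced setup/common.sh appears to do, not the function itself:

#!/usr/bin/env bash
# Illustrative sketch: print the value of a single /proc/meminfo key, mirroring
# the IFS=': ' / read -r var val parsing loop that dominates the trace above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch HugePages_Surp   # prints 0 on the test VM in this run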
00:05:24.895 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.895 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.896 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.896 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.896 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:24.896 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:24.896 13:19:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.896 nr_hugepages=1025 00:05:24.896 13:19:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:24.896 resv_hugepages=0 00:05:24.896 13:19:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.896 surplus_hugepages=0 00:05:24.896 13:19:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.896 anon_hugepages=0 00:05:24.896 13:19:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.896 13:19:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:24.896 13:19:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:24.897 13:19:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.897 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.897 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:24.897 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:24.897 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.897 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.897 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.897 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.897 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.897 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512120 kB' 'MemAvailable: 9442356 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497508 kB' 'Inactive: 2754052 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191840 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103536 kB' 'KernelStack: 6752 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 
kB' 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 
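After the three lookups above (anon=0, surp=0, resv=0) and the HugePages_Total read of 1025, the test checks that the pool it asked for is exactly what the kernel reports. The accounting reduces to the comparison below, shown with the values read in this run; the variable names follow the trace, but the standalone check is illustrative:

# Illustrative only: the pool-size accounting performed above, using the values
# this run just read from /proc/meminfo.
nr_hugepages=1025   # requested by odd_alloc
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1025          # HugePages_Total
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool size verified"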
00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.897 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.897 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.898 13:19:30 -- setup/common.sh@33 -- # echo 1025 00:05:24.898 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:24.898 13:19:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:24.898 13:19:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.898 13:19:30 -- setup/hugepages.sh@27 -- # local node 00:05:24.898 13:19:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.898 13:19:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
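
The block above is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time until it reaches HugePages_Total, echoing 1025 and returning; setup/hugepages.sh then checks that total against nr_hugepages plus surplus and reserved pages and starts collecting per-node counts. A minimal sketch of that parse loop, reconstructed from the xtrace (helper and variable names follow the trace, but this is not the verbatim setup/common.sh):

# Sketch of the get_meminfo helper as it behaves in the xtrace above (names
# follow the trace; this is a reconstruction, not the verbatim setup/common.sh).
shopt -s extglob    # needed for the +([0-9]) pattern used to strip node prefixes

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node queries read the sysfs copy of meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node meminfo lines start with "Node <id> "; strip it so the keys match.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # every non-matching key logs "continue" above
        echo "$val"                         # e.g. 1025 for HugePages_Total in this run
        return 0
    done
    return 1
}

The get_meminfo HugePages_Surp 0 call that follows runs the same loop over /sys/devices/system/node/node0/meminfo and comes back with 0.
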
00:05:24.898 13:19:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.898 13:19:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.898 13:19:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.898 13:19:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.898 13:19:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.898 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.898 13:19:30 -- setup/common.sh@18 -- # local node=0 00:05:24.898 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:24.898 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.898 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.898 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.898 13:19:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.898 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.898 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512120 kB' 'MemUsed: 5726984 kB' 'SwapCached: 0 kB' 'Active: 497716 kB' 'Inactive: 2754052 kB' 'Active(anon): 128532 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 3133724 kB' 'Mapped: 50740 kB' 'AnonPages: 119656 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191840 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 
13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 
13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.898 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.898 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # continue 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.899 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.899 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.899 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:24.899 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:24.899 13:19:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.899 13:19:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.899 13:19:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.899 node0=1025 expecting 1025 00:05:24.899 13:19:30 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:24.899 13:19:30 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:24.899 00:05:24.899 real 0m0.525s 00:05:24.899 user 0m0.259s 00:05:24.899 sys 0m0.303s 00:05:24.899 13:19:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.899 13:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.899 ************************************ 00:05:24.899 END TEST odd_alloc 00:05:24.899 ************************************ 00:05:24.899 13:19:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:24.899 13:19:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.899 13:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.899 13:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.899 ************************************ 00:05:24.899 START TEST custom_alloc 00:05:24.899 ************************************ 00:05:24.899 13:19:30 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:24.899 13:19:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:24.899 13:19:30 -- setup/hugepages.sh@169 -- # local node 00:05:24.899 13:19:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:24.899 13:19:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:24.899 13:19:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:24.899 13:19:30 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:24.899 13:19:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:24.899 13:19:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.899 13:19:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.899 13:19:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.899 13:19:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.899 13:19:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.899 13:19:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.899 13:19:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@83 -- # : 0 00:05:24.899 13:19:30 -- setup/hugepages.sh@84 -- # : 0 00:05:24.899 13:19:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:24.899 13:19:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:24.899 13:19:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:24.899 13:19:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.899 13:19:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.899 13:19:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.899 13:19:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.899 13:19:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.899 13:19:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:24.899 13:19:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:24.899 13:19:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:24.899 13:19:30 -- setup/hugepages.sh@78 -- # return 0 00:05:24.899 13:19:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:24.899 13:19:30 -- setup/hugepages.sh@187 -- # setup output 00:05:24.899 13:19:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.899 13:19:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.159 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.159 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.159 13:19:30 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:25.159 13:19:30 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:25.159 13:19:30 -- setup/hugepages.sh@89 -- # local node 00:05:25.159 13:19:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.159 13:19:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.159 13:19:30 -- setup/hugepages.sh@92 -- # local surp 
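
For custom_alloc, get_test_nr_hugepages is asked for a 1048576 kB pool; with the 2048 kB default hugepage size reported in the meminfo dumps below, that works out to 512 hugepages, and with a single NUMA node the whole pool is pinned to node 0 via HUGENODE='nodes_hp[0]=512'. The sizing arithmetic, with illustrative variable names:

# Hugepage sizing behind the custom_alloc setup above (illustrative names;
# the 2048 kB page size is the Hugepagesize value shown in the meminfo dumps).
size_kb=1048576                            # pool size requested by the test
hugepage_kb=2048                           # default hugepage size on this VM
nr_hugepages=$(( size_kb / hugepage_kb ))  # 1048576 / 2048 = 512 pages
echo "HUGENODE='nodes_hp[0]=${nr_hugepages}'"   # single node -> all 512 pages on node 0
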
00:05:25.159 13:19:30 -- setup/hugepages.sh@93 -- # local resv 00:05:25.159 13:19:30 -- setup/hugepages.sh@94 -- # local anon 00:05:25.159 13:19:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.159 13:19:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.159 13:19:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.159 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:25.159 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:25.159 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.159 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.159 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.159 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.159 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.159 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7561420 kB' 'MemAvailable: 10491656 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498100 kB' 'Inactive: 2754052 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120024 kB' 'Mapped: 50868 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191800 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103496 kB' 'KernelStack: 6776 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.159 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.159 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.160 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.160 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.160 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:25.160 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:25.422 13:19:30 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.422 13:19:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.422 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.422 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:25.422 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:25.422 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.422 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
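
With anon (AnonHugePages) read back as 0, verify_nr_hugepages goes on to read HugePages_Surp and HugePages_Rsvd the same way and then checks the pool: the global HugePages_Total must equal the requested count plus surplus and reserved pages, and each node's count must match what the test configured. A simplified sketch of those checks, reusing the get_meminfo sketch above and this run's values (not the verbatim setup/hugepages.sh, which compares the per-node counts through sorted sets):

# Simplified sketch of the hugepage verification seen in this trace. The
# counts below mirror this custom_alloc run (512 pages, one NUMA node);
# nodes_sys/nodes_test would normally be filled in by get_nodes and the test.
nr_hugepages=512
nodes_sys=(512)    # pages the kernel reports per node
nodes_test=(512)   # pages the test asked for per node

verify_hugepage_counts() {
    local anon surp resv node
    anon=$(get_meminfo AnonHugePages)    # fetched first in the trace (0 here); unused in this simplified check
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    # Global pool: total pages must cover the requested count plus surplus/reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
    # Per node: the observed count (plus reserved/surplus) must match expectations.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || return 1
    done
}
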
00:05:25.422 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.422 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.422 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.422 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7561580 kB' 'MemAvailable: 10491816 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497776 kB' 'Inactive: 2754052 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119676 kB' 'Mapped: 50868 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191804 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103500 kB' 'KernelStack: 6728 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- 
setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.422 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.422 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.422 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.423 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:25.423 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:25.423 13:19:30 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.423 13:19:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.423 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.423 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:25.423 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:25.423 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.423 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.423 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.423 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.423 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.423 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7561580 kB' 'MemAvailable: 10491816 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497704 kB' 'Inactive: 2754052 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119596 kB' 'Mapped: 
50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191804 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103500 kB' 'KernelStack: 6752 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.423 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.423 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 
00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.424 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.424 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:25.424 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:25.424 13:19:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.424 nr_hugepages=512 00:05:25.424 13:19:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:25.424 resv_hugepages=0 00:05:25.424 13:19:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.424 surplus_hugepages=0 00:05:25.424 13:19:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.424 anon_hugepages=0 00:05:25.424 13:19:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.424 13:19:30 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:25.424 13:19:30 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:25.424 13:19:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.424 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.424 13:19:30 -- setup/common.sh@18 -- # local node= 00:05:25.424 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:25.424 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.424 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.424 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.424 13:19:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.424 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.424 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.424 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7561580 kB' 'MemAvailable: 10491816 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497704 kB' 'Inactive: 2754052 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119596 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191804 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103500 kB' 'KernelStack: 6752 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.425 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.425 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.426 13:19:30 -- setup/common.sh@33 -- # echo 512 00:05:25.426 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:25.426 13:19:30 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:25.426 13:19:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.426 13:19:30 -- setup/hugepages.sh@27 -- # local node 00:05:25.426 13:19:30 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:25.426 13:19:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.426 13:19:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.426 13:19:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.426 13:19:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.426 13:19:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.426 13:19:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.426 13:19:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.426 13:19:30 -- setup/common.sh@18 -- # local node=0 00:05:25.426 13:19:30 -- setup/common.sh@19 -- # local var val 00:05:25.426 13:19:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.426 13:19:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.426 13:19:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.426 13:19:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.426 13:19:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.426 13:19:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7561580 kB' 'MemUsed: 4677524 kB' 'SwapCached: 0 kB' 'Active: 497864 kB' 'Inactive: 2754052 kB' 'Active(anon): 128680 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133724 kB' 'Mapped: 50740 kB' 'AnonPages: 119756 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191804 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.426 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.426 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 
13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # continue 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.427 13:19:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.427 13:19:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.427 13:19:30 -- setup/common.sh@33 -- # echo 0 00:05:25.427 13:19:30 -- setup/common.sh@33 -- # return 0 00:05:25.427 13:19:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.427 13:19:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.427 13:19:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.427 13:19:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.427 node0=512 expecting 512 00:05:25.427 13:19:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:25.427 13:19:30 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:25.427 00:05:25.427 real 0m0.510s 00:05:25.427 user 0m0.260s 00:05:25.427 sys 0m0.285s 00:05:25.427 13:19:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.427 13:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:25.427 ************************************ 00:05:25.427 END TEST custom_alloc 00:05:25.427 ************************************ 00:05:25.427 13:19:30 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:25.427 13:19:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.427 13:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.427 13:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:25.427 ************************************ 00:05:25.427 START TEST no_shrink_alloc 00:05:25.427 ************************************ 00:05:25.427 13:19:31 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:25.427 13:19:31 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:25.427 13:19:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.427 13:19:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.427 13:19:31 -- 
setup/hugepages.sh@51 -- # shift 00:05:25.427 13:19:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.427 13:19:31 -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.427 13:19:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.427 13:19:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.427 13:19:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.427 13:19:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.427 13:19:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.427 13:19:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.427 13:19:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.427 13:19:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.427 13:19:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.427 13:19:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.427 13:19:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:25.427 13:19:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:25.427 13:19:31 -- setup/hugepages.sh@73 -- # return 0 00:05:25.427 13:19:31 -- setup/hugepages.sh@198 -- # setup output 00:05:25.427 13:19:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.427 13:19:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.686 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.686 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:25.686 13:19:31 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:25.686 13:19:31 -- setup/hugepages.sh@89 -- # local node 00:05:25.686 13:19:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.686 13:19:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.686 13:19:31 -- setup/hugepages.sh@92 -- # local surp 00:05:25.686 13:19:31 -- setup/hugepages.sh@93 -- # local resv 00:05:25.686 13:19:31 -- setup/hugepages.sh@94 -- # local anon 00:05:25.949 13:19:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.949 13:19:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.949 13:19:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.949 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:25.949 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:25.949 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.949 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.949 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.949 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.949 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.949 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512228 kB' 'MemAvailable: 9442464 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498476 kB' 'Inactive: 2754052 kB' 'Active(anon): 129292 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 
'Mapped: 50764 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191776 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103472 kB' 'KernelStack: 6792 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.949 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.949 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.950 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:25.950 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:25.950 13:19:31 -- setup/hugepages.sh@97 -- # anon=0 00:05:25.950 13:19:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.950 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.950 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:25.950 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:25.950 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.950 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.950 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.950 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.950 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.950 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512472 kB' 'MemAvailable: 9442708 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497840 kB' 'Inactive: 2754052 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191788 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6768 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.950 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.950 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 
00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.951 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.951 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.952 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:25.952 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:25.952 13:19:31 -- setup/hugepages.sh@99 -- # surp=0 00:05:25.952 13:19:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.952 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.952 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:25.952 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:25.952 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.952 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.952 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.952 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.952 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.952 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512472 kB' 'MemAvailable: 9442708 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497840 kB' 'Inactive: 2754052 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119784 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191788 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6768 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 
-- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.952 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.952 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 
13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.953 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:25.953 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:25.953 13:19:31 -- setup/hugepages.sh@100 -- # resv=0 00:05:25.953 nr_hugepages=1024 00:05:25.953 13:19:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.953 resv_hugepages=0 00:05:25.953 13:19:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.953 surplus_hugepages=0 00:05:25.953 13:19:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.953 anon_hugepages=0 00:05:25.953 13:19:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.953 13:19:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.953 13:19:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.953 13:19:31 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:05:25.953 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.953 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:25.953 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:25.953 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.953 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.953 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.953 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.953 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.953 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512472 kB' 'MemAvailable: 9442708 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497852 kB' 'Inactive: 2754052 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191788 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103484 kB' 'KernelStack: 6736 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.953 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.953 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
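The scan running through these lines is the same helper looking up HugePages_Total. Its result, together with the surp=0, resv=0 and nr_hugepages=1024 values echoed a little earlier, feeds the consistency guard at setup/hugepages.sh@107 and @110: the HugePages_Total read back from the kernel must equal the requested page count plus surplus and reserved pages. A hedged sketch of that arithmetic with the values from this run (variable names follow the trace; the surrounding script is not reproduced):

    nr_hugepages=1024   # requested 2 MiB pages (nr_hugepages=1024 echoed by hugepages.sh@102)
    surp=0              # HugePages_Surp returned by the first scan
    resv=0              # HugePages_Rsvd returned by the second scan
    total=1024          # HugePages_Total returned by this scan
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"   # the check at hugepages.sh@107/@110 passes
    fi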
00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 
-- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.954 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.954 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.955 13:19:31 -- setup/common.sh@33 -- # echo 1024 00:05:25.955 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:25.955 13:19:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.955 13:19:31 -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.955 13:19:31 -- setup/hugepages.sh@27 -- # local node 00:05:25.955 13:19:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.955 13:19:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.955 13:19:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.955 13:19:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.955 13:19:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.955 13:19:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.955 13:19:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.955 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.955 13:19:31 -- setup/common.sh@18 -- # local node=0 00:05:25.955 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:25.955 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:25.955 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.955 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.955 13:19:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.955 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.955 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512472 kB' 'MemUsed: 5726632 kB' 'SwapCached: 0 kB' 'Active: 497732 kB' 'Inactive: 2754052 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3133724 kB' 'Mapped: 50740 kB' 'AnonPages: 119628 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191788 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 
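Just above, the helper is invoked per NUMA node (node=0), so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading 'Node 0 ' prefix is stripped from every captured line before the same field scan runs, which is what the '${mem[@]#Node +([0-9]) }' expansion in the trace does. A small sketch of that per-node path, assuming extglob is enabled as the +([0-9]) pattern requires:

    shopt -s extglob                                   # +([0-9]) below is an extglob pattern
    node=0
    mem_f=/proc/meminfo                                # system-wide fallback
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                          # capture the whole file into an array
    mem=("${mem[@]#Node +([0-9]) }")                   # drop the "Node 0 " prefix on each line
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'  # per-node hugepage counters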
00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.955 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.955 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # continue 00:05:25.956 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 
00:05:25.956 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:25.956 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.956 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:25.956 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:25.956 13:19:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.956 13:19:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.956 13:19:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.956 13:19:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.956 node0=1024 expecting 1024 00:05:25.956 13:19:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.956 13:19:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.956 13:19:31 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:25.956 13:19:31 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:25.956 13:19:31 -- setup/hugepages.sh@202 -- # setup output 00:05:25.956 13:19:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.956 13:19:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.214 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.214 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.214 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:26.477 13:19:31 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:26.477 13:19:31 -- setup/hugepages.sh@89 -- # local node 00:05:26.477 13:19:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.477 13:19:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.477 13:19:31 -- setup/hugepages.sh@92 -- # local surp 00:05:26.477 13:19:31 -- setup/hugepages.sh@93 -- # local resv 00:05:26.477 13:19:31 -- setup/hugepages.sh@94 -- # local anon 00:05:26.477 13:19:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.477 13:19:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.477 13:19:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.477 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:26.477 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:26.477 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.477 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.477 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.477 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.477 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.477 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512644 kB' 'MemAvailable: 9442880 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498828 kB' 'Inactive: 2754052 kB' 'Active(anon): 129644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120604 kB' 'Mapped: 50856 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191768 kB' 'SReclaimable: 88304 
kB' 'SUnreclaim: 103464 kB' 'KernelStack: 6856 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.477 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.477 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 
13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.478 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.478 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:26.478 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:26.478 13:19:31 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.478 13:19:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.478 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.478 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:26.478 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:26.478 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.478 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.478 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.478 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.478 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.478 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.478 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512644 kB' 'MemAvailable: 9442880 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498220 kB' 'Inactive: 2754052 kB' 'Active(anon): 129036 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120208 kB' 'Mapped: 50748 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191760 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103456 kB' 'KernelStack: 6808 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 
-- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.479 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.479 13:19:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.480 13:19:31 -- setup/common.sh@33 -- # echo 0 00:05:26.480 13:19:31 -- setup/common.sh@33 -- # return 0 00:05:26.480 13:19:31 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.480 13:19:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.480 13:19:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.480 13:19:31 -- setup/common.sh@18 -- # local node= 00:05:26.480 13:19:31 -- setup/common.sh@19 -- # local var val 00:05:26.480 13:19:31 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.480 13:19:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.480 13:19:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.480 13:19:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.480 13:19:31 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.480 13:19:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512888 kB' 'MemAvailable: 9443124 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 498020 kB' 'Inactive: 2754052 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 50840 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191756 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103452 kB' 'KernelStack: 6728 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.480 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.480 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 
00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:31 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:31 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.481 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.481 13:19:32 -- setup/common.sh@33 -- # echo 0 00:05:26.481 13:19:32 -- setup/common.sh@33 -- # return 0 00:05:26.481 nr_hugepages=1024 00:05:26.481 13:19:32 -- setup/hugepages.sh@100 -- # resv=0 00:05:26.481 13:19:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.481 resv_hugepages=0 00:05:26.481 13:19:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.481 surplus_hugepages=0 00:05:26.481 13:19:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.481 anon_hugepages=0 00:05:26.481 13:19:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.481 13:19:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.481 13:19:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.481 13:19:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.481 13:19:32 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:26.481 13:19:32 -- setup/common.sh@18 -- # local node= 00:05:26.481 13:19:32 -- setup/common.sh@19 -- # local var val 00:05:26.481 13:19:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.481 13:19:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.481 13:19:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.481 13:19:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.481 13:19:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.481 13:19:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.481 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6512888 kB' 'MemAvailable: 9443124 kB' 'Buffers: 3704 kB' 'Cached: 3130020 kB' 'SwapCached: 0 kB' 'Active: 497904 kB' 'Inactive: 2754052 kB' 'Active(anon): 128720 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119800 kB' 'Mapped: 50740 kB' 'Shmem: 10488 kB' 'KReclaimable: 88304 kB' 'Slab: 191756 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103452 kB' 'KernelStack: 6752 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 324820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 4022272 kB' 'DirectMap1G: 10485760 kB' 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- 
setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 
00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 
13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.482 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.482 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.483 13:19:32 -- setup/common.sh@33 -- # echo 1024 00:05:26.483 13:19:32 -- setup/common.sh@33 -- # return 0 00:05:26.483 13:19:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.483 13:19:32 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.483 13:19:32 -- setup/hugepages.sh@27 -- # local node 00:05:26.483 13:19:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.483 13:19:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.483 13:19:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.483 13:19:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.483 13:19:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.483 13:19:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.483 13:19:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.483 13:19:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.483 13:19:32 -- setup/common.sh@18 -- # local node=0 00:05:26.483 13:19:32 -- setup/common.sh@19 -- # local var val 00:05:26.483 13:19:32 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.483 13:19:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.483 13:19:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:26.483 13:19:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.483 13:19:32 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.483 13:19:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6514180 kB' 'MemUsed: 5724924 kB' 'SwapCached: 0 kB' 'Active: 497856 kB' 'Inactive: 2754052 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 369184 kB' 'Inactive(file): 2754052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3133728 kB' 'Mapped: 50740 kB' 
'AnonPages: 119816 kB' 'Shmem: 10488 kB' 'KernelStack: 6768 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88304 kB' 'Slab: 191744 kB' 'SReclaimable: 88304 kB' 'SUnreclaim: 103440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 
00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.483 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.483 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- 
setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@32 -- # continue 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.484 13:19:32 -- setup/common.sh@31 -- # read -r var val _ 
00:05:26.484 13:19:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.484 13:19:32 -- setup/common.sh@33 -- # echo 0 00:05:26.484 13:19:32 -- setup/common.sh@33 -- # return 0 00:05:26.484 13:19:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.484 13:19:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.484 13:19:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.484 13:19:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.484 node0=1024 expecting 1024 00:05:26.484 ************************************ 00:05:26.484 END TEST no_shrink_alloc 00:05:26.484 ************************************ 00:05:26.484 13:19:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.484 13:19:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.484 00:05:26.484 real 0m1.075s 00:05:26.484 user 0m0.543s 00:05:26.484 sys 0m0.557s 00:05:26.484 13:19:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.484 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.484 13:19:32 -- setup/hugepages.sh@217 -- # clear_hp 00:05:26.484 13:19:32 -- setup/hugepages.sh@37 -- # local node hp 00:05:26.484 13:19:32 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:26.484 13:19:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:26.484 13:19:32 -- setup/hugepages.sh@41 -- # echo 0 00:05:26.484 13:19:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:26.484 13:19:32 -- setup/hugepages.sh@41 -- # echo 0 00:05:26.484 13:19:32 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:26.484 13:19:32 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:26.484 ************************************ 00:05:26.484 END TEST hugepages 00:05:26.484 ************************************ 00:05:26.484 00:05:26.484 real 0m4.661s 00:05:26.484 user 0m2.298s 00:05:26.484 sys 0m2.447s 00:05:26.484 13:19:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.484 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 13:19:32 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:26.744 13:19:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.744 13:19:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.744 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.744 ************************************ 00:05:26.744 START TEST driver 00:05:26.744 ************************************ 00:05:26.744 13:19:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:26.744 * Looking for test storage... 
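(Annotation, not part of the captured output.) The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo, or the per-node /sys/devices/system/node/node0/meminfo, field by field until it hits the requested key (HugePages_Total, then HugePages_Surp), echoing the value, and letting hugepages.sh compare it against the expected reservation. A minimal standalone sketch of that idea is below; the function name get_meminfo_value is hypothetical, and unlike the real helper (which keeps the file in a bash array and strips the "Node N" prefix in pure bash), this sketch leans on sed/awk for brevity.

# get_meminfo_value KEY [NODE] - print the numeric value of KEY (sketch only)
get_meminfo_value() {
    local key=$1 node=${2-} file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; drop it, then match the key.
    sed -E 's/^Node [0-9]+ +//' "$file" | awk -v k="$key:" '$1 == k {print $2; exit}'
}

# usage, mirroring the check above: node 0 should expose the 1024 reserved huge pages
(( $(get_meminfo_value HugePages_Total 0) == 1024 )) && echo 'node0=1024 expecting 1024'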
00:05:26.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:26.744 13:19:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:26.744 13:19:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:26.744 13:19:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:26.744 13:19:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:26.744 13:19:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:26.744 13:19:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:26.744 13:19:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:26.744 13:19:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:26.744 13:19:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:26.744 13:19:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.744 13:19:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:26.744 13:19:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:26.744 13:19:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:26.744 13:19:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:26.744 13:19:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:26.744 13:19:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:26.744 13:19:32 -- scripts/common.sh@344 -- # : 1 00:05:26.744 13:19:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:26.744 13:19:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.744 13:19:32 -- scripts/common.sh@364 -- # decimal 1 00:05:26.744 13:19:32 -- scripts/common.sh@352 -- # local d=1 00:05:26.744 13:19:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.744 13:19:32 -- scripts/common.sh@354 -- # echo 1 00:05:26.744 13:19:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:26.744 13:19:32 -- scripts/common.sh@365 -- # decimal 2 00:05:26.744 13:19:32 -- scripts/common.sh@352 -- # local d=2 00:05:26.744 13:19:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.744 13:19:32 -- scripts/common.sh@354 -- # echo 2 00:05:26.744 13:19:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:26.744 13:19:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:26.744 13:19:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:26.744 13:19:32 -- scripts/common.sh@367 -- # return 0 00:05:26.744 13:19:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.744 13:19:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.744 --rc genhtml_branch_coverage=1 00:05:26.744 --rc genhtml_function_coverage=1 00:05:26.744 --rc genhtml_legend=1 00:05:26.744 --rc geninfo_all_blocks=1 00:05:26.744 --rc geninfo_unexecuted_blocks=1 00:05:26.744 00:05:26.744 ' 00:05:26.744 13:19:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.744 --rc genhtml_branch_coverage=1 00:05:26.744 --rc genhtml_function_coverage=1 00:05:26.744 --rc genhtml_legend=1 00:05:26.744 --rc geninfo_all_blocks=1 00:05:26.744 --rc geninfo_unexecuted_blocks=1 00:05:26.744 00:05:26.744 ' 00:05:26.744 13:19:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.744 --rc genhtml_branch_coverage=1 00:05:26.744 --rc genhtml_function_coverage=1 00:05:26.744 --rc genhtml_legend=1 00:05:26.744 --rc geninfo_all_blocks=1 00:05:26.744 --rc geninfo_unexecuted_blocks=1 00:05:26.744 00:05:26.744 ' 00:05:26.744 13:19:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:26.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.744 --rc genhtml_branch_coverage=1 00:05:26.744 --rc genhtml_function_coverage=1 00:05:26.744 --rc genhtml_legend=1 00:05:26.744 --rc geninfo_all_blocks=1 00:05:26.744 --rc geninfo_unexecuted_blocks=1 00:05:26.744 00:05:26.744 ' 00:05:26.744 13:19:32 -- setup/driver.sh@68 -- # setup reset 00:05:26.744 13:19:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.744 13:19:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.311 13:19:32 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:27.312 13:19:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.312 13:19:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.312 13:19:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.312 ************************************ 00:05:27.312 START TEST guess_driver 00:05:27.312 ************************************ 00:05:27.312 13:19:32 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:27.312 13:19:32 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:27.312 13:19:32 -- setup/driver.sh@47 -- # local fail=0 00:05:27.312 13:19:32 -- setup/driver.sh@49 -- # pick_driver 00:05:27.312 13:19:32 -- setup/driver.sh@36 -- # vfio 00:05:27.312 13:19:32 -- setup/driver.sh@21 -- # local iommu_grups 00:05:27.312 13:19:32 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:27.312 13:19:32 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:27.312 13:19:32 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:27.312 13:19:32 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:27.312 13:19:32 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:27.312 13:19:32 -- setup/driver.sh@32 -- # return 1 00:05:27.312 13:19:32 -- setup/driver.sh@38 -- # uio 00:05:27.312 13:19:32 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:27.312 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:27.312 13:19:32 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:27.312 Looking for driver=uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:27.312 13:19:32 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:27.312 13:19:32 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:27.312 13:19:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.312 13:19:32 -- setup/driver.sh@45 -- # setup output config 00:05:27.312 13:19:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.312 13:19:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.879 13:19:33 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:27.879 13:19:33 -- setup/driver.sh@58 -- # continue 00:05:27.879 13:19:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.138 13:19:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.138 13:19:33 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:28.138 13:19:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.138 13:19:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:28.138 13:19:33 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:28.138 13:19:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.138 13:19:33 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:28.138 13:19:33 -- setup/driver.sh@65 -- # setup reset 00:05:28.138 13:19:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.138 13:19:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.735 00:05:28.735 real 0m1.388s 00:05:28.735 user 0m0.560s 00:05:28.735 sys 0m0.829s 00:05:28.735 13:19:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.735 13:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.735 ************************************ 00:05:28.735 END TEST guess_driver 00:05:28.735 ************************************ 00:05:28.735 ************************************ 00:05:28.735 END TEST driver 00:05:28.735 ************************************ 00:05:28.735 00:05:28.735 real 0m2.163s 00:05:28.735 user 0m0.883s 00:05:28.735 sys 0m1.337s 00:05:28.735 13:19:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.735 13:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.735 13:19:34 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:28.735 13:19:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.735 13:19:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.735 13:19:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.735 ************************************ 00:05:28.735 START TEST devices 00:05:28.735 ************************************ 00:05:28.735 13:19:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:28.994 * Looking for test storage... 00:05:28.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:28.994 13:19:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:28.994 13:19:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:28.994 13:19:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:28.994 13:19:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:28.994 13:19:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:28.994 13:19:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:28.994 13:19:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:28.994 13:19:34 -- scripts/common.sh@335 -- # IFS=.-: 00:05:28.994 13:19:34 -- scripts/common.sh@335 -- # read -ra ver1 00:05:28.994 13:19:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.994 13:19:34 -- scripts/common.sh@336 -- # read -ra ver2 00:05:28.994 13:19:34 -- scripts/common.sh@337 -- # local 'op=<' 00:05:28.994 13:19:34 -- scripts/common.sh@339 -- # ver1_l=2 00:05:28.994 13:19:34 -- scripts/common.sh@340 -- # ver2_l=1 00:05:28.994 13:19:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:28.994 13:19:34 -- scripts/common.sh@343 -- # case "$op" in 00:05:28.994 13:19:34 -- scripts/common.sh@344 -- # : 1 00:05:28.994 13:19:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:28.994 13:19:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.994 13:19:34 -- scripts/common.sh@364 -- # decimal 1 00:05:28.994 13:19:34 -- scripts/common.sh@352 -- # local d=1 00:05:28.994 13:19:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.994 13:19:34 -- scripts/common.sh@354 -- # echo 1 00:05:28.994 13:19:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:28.994 13:19:34 -- scripts/common.sh@365 -- # decimal 2 00:05:28.994 13:19:34 -- scripts/common.sh@352 -- # local d=2 00:05:28.994 13:19:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.994 13:19:34 -- scripts/common.sh@354 -- # echo 2 00:05:28.994 13:19:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:28.994 13:19:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:28.994 13:19:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:28.994 13:19:34 -- scripts/common.sh@367 -- # return 0 00:05:28.994 13:19:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.994 13:19:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:28.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.995 --rc genhtml_branch_coverage=1 00:05:28.995 --rc genhtml_function_coverage=1 00:05:28.995 --rc genhtml_legend=1 00:05:28.995 --rc geninfo_all_blocks=1 00:05:28.995 --rc geninfo_unexecuted_blocks=1 00:05:28.995 00:05:28.995 ' 00:05:28.995 13:19:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:28.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.995 --rc genhtml_branch_coverage=1 00:05:28.995 --rc genhtml_function_coverage=1 00:05:28.995 --rc genhtml_legend=1 00:05:28.995 --rc geninfo_all_blocks=1 00:05:28.995 --rc geninfo_unexecuted_blocks=1 00:05:28.995 00:05:28.995 ' 00:05:28.995 13:19:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:28.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.995 --rc genhtml_branch_coverage=1 00:05:28.995 --rc genhtml_function_coverage=1 00:05:28.995 --rc genhtml_legend=1 00:05:28.995 --rc geninfo_all_blocks=1 00:05:28.995 --rc geninfo_unexecuted_blocks=1 00:05:28.995 00:05:28.995 ' 00:05:28.995 13:19:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:28.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.995 --rc genhtml_branch_coverage=1 00:05:28.995 --rc genhtml_function_coverage=1 00:05:28.995 --rc genhtml_legend=1 00:05:28.995 --rc geninfo_all_blocks=1 00:05:28.995 --rc geninfo_unexecuted_blocks=1 00:05:28.995 00:05:28.995 ' 00:05:28.995 13:19:34 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:28.995 13:19:34 -- setup/devices.sh@192 -- # setup reset 00:05:28.995 13:19:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.995 13:19:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.933 13:19:35 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:29.933 13:19:35 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:29.933 13:19:35 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:29.933 13:19:35 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:29.933 13:19:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:29.933 13:19:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:29.933 13:19:35 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:29.933 13:19:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:29.933 13:19:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:29.933 13:19:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:29.933 13:19:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:29.933 13:19:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:29.933 13:19:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:29.933 13:19:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:29.933 13:19:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:29.933 13:19:35 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:29.933 13:19:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:29.933 13:19:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:29.933 13:19:35 -- setup/devices.sh@196 -- # blocks=() 00:05:29.933 13:19:35 -- setup/devices.sh@196 -- # declare -a blocks 00:05:29.933 13:19:35 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:29.933 13:19:35 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:29.933 13:19:35 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:29.933 13:19:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:29.933 13:19:35 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:29.933 13:19:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:29.933 13:19:35 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:29.933 13:19:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:29.933 No valid GPT data, bailing 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # pt= 00:05:29.933 13:19:35 -- scripts/common.sh@394 -- # return 1 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:29.933 13:19:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:29.933 13:19:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:29.933 13:19:35 -- setup/common.sh@80 -- # echo 5368709120 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:29.933 13:19:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.933 13:19:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:29.933 13:19:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:29.933 13:19:35 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
00:05:29.933 13:19:35 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:29.933 13:19:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:29.933 No valid GPT data, bailing 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # pt= 00:05:29.933 13:19:35 -- scripts/common.sh@394 -- # return 1 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:29.933 13:19:35 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:29.933 13:19:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:29.933 13:19:35 -- setup/common.sh@80 -- # echo 4294967296 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:29.933 13:19:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.933 13:19:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:29.933 13:19:35 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:29.933 13:19:35 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:29.933 13:19:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:29.933 No valid GPT data, bailing 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # pt= 00:05:29.933 13:19:35 -- scripts/common.sh@394 -- # return 1 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:29.933 13:19:35 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:29.933 13:19:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:29.933 13:19:35 -- setup/common.sh@80 -- # echo 4294967296 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:29.933 13:19:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.933 13:19:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:29.933 13:19:35 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:29.933 13:19:35 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:29.933 13:19:35 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:29.933 13:19:35 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:29.933 No valid GPT data, bailing 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:29.933 13:19:35 -- scripts/common.sh@393 -- # pt= 00:05:29.933 13:19:35 -- scripts/common.sh@394 -- # return 1 00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:29.933 13:19:35 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:29.933 13:19:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:29.933 13:19:35 -- setup/common.sh@80 -- # echo 4294967296 
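(Annotation, not part of the captured output.) The devices test above walks each nvme block device, skips zoned namespaces (is_block_zoned), treats a disk as in use only if a partition table is found on it (scripts/spdk-gpt.py reports "No valid GPT data, bailing", then blkid -s PTTYPE confirms), and keeps devices whose capacity clears the 3 GiB minimum, recording each one's PCI address. A rough bash equivalent is sketched below; it collapses the spdk-gpt.py step into the blkid probe and omits the blocks_to_pci bookkeeping, so it is an approximation of the flow rather than the project's actual code.

eligible=()
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the test above
for sys in /sys/block/nvme*n*; do
    dev=${sys##*/}
    # Skip zoned namespaces, mirroring is_block_zoned
    [[ $(cat "$sys/queue/zoned" 2>/dev/null) == none ]] || continue
    # A device that already carries a partition table is considered "in use"
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # Capacity = 512-byte sector count * 512; keep only disks above the minimum
    size=$(( $(cat "$sys/size") * 512 ))
    (( size >= min_disk_size )) || continue
    eligible+=("$dev")
done
printf 'eligible test disk: %s\n' "${eligible[@]}"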
00:05:29.933 13:19:35 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:29.933 13:19:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:29.933 13:19:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:29.933 13:19:35 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:29.933 13:19:35 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:29.933 13:19:35 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:29.933 13:19:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.933 13:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.933 13:19:35 -- common/autotest_common.sh@10 -- # set +x 00:05:29.933 ************************************ 00:05:29.933 START TEST nvme_mount 00:05:29.933 ************************************ 00:05:29.933 13:19:35 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:29.933 13:19:35 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:29.933 13:19:35 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:29.933 13:19:35 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.933 13:19:35 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.933 13:19:35 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:29.933 13:19:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:29.933 13:19:35 -- setup/common.sh@40 -- # local part_no=1 00:05:29.933 13:19:35 -- setup/common.sh@41 -- # local size=1073741824 00:05:29.933 13:19:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:29.933 13:19:35 -- setup/common.sh@44 -- # parts=() 00:05:29.933 13:19:35 -- setup/common.sh@44 -- # local parts 00:05:29.934 13:19:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:29.934 13:19:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.934 13:19:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:29.934 13:19:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:29.934 13:19:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:29.934 13:19:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:29.934 13:19:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:29.934 13:19:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:31.310 Creating new GPT entries in memory. 00:05:31.311 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.311 other utilities. 00:05:31.311 13:19:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.311 13:19:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.311 13:19:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.311 13:19:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.311 13:19:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:32.247 Creating new GPT entries in memory. 00:05:32.247 The operation has completed successfully. 
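(Annotation, not part of the captured output.) The nvme_mount test above zaps the GPT on the test disk and creates one partition by sector range (2048-264191, about 128 MiB), holding flock on the whole-disk node so the partition rescan does not race, then waits for the nvme0n1p1 uevent via scripts/sync_dev_uevents.sh before mkfs/mount. A condensed sketch of that sequence follows; the device and mount paths are the test's own, udevadm settle stands in for sync_dev_uevents.sh, and error handling is omitted.

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                           # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1: sectors 2048-264191 (~128 MiB)
udevadm settle                                     # stand-in for sync_dev_uevents.sh block/partition nvme0n1p1

mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                          # quiet, force, as in setup/common.sh mkfs()
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                             # dummy file the verify step looks for

# teardown, mirroring cleanup_nvme
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"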
00:05:32.247 13:19:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:32.247 13:19:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.247 13:19:37 -- setup/common.sh@62 -- # wait 65840 00:05:32.247 13:19:37 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.247 13:19:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:32.247 13:19:37 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.247 13:19:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:32.247 13:19:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:32.247 13:19:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.247 13:19:37 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.247 13:19:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:32.247 13:19:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:32.247 13:19:37 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.247 13:19:37 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.247 13:19:37 -- setup/devices.sh@53 -- # local found=0 00:05:32.247 13:19:37 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.247 13:19:37 -- setup/devices.sh@56 -- # : 00:05:32.247 13:19:37 -- setup/devices.sh@59 -- # local pci status 00:05:32.247 13:19:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:32.247 13:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.247 13:19:37 -- setup/devices.sh@47 -- # setup output config 00:05:32.247 13:19:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.247 13:19:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.247 13:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.247 13:19:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:32.247 13:19:37 -- setup/devices.sh@63 -- # found=1 00:05:32.247 13:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.247 13:19:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.247 13:19:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.814 13:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.814 13:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.814 13:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.814 13:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.814 13:19:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.814 13:19:38 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:32.814 13:19:38 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.814 13:19:38 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.814 13:19:38 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.814 13:19:38 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:32.814 13:19:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.814 13:19:38 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.814 13:19:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.814 13:19:38 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:32.814 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.814 13:19:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.814 13:19:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.074 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.074 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.074 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.074 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.074 13:19:38 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:33.074 13:19:38 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:33.074 13:19:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.074 13:19:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:33.074 13:19:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:33.074 13:19:38 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.074 13:19:38 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.074 13:19:38 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.074 13:19:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:33.074 13:19:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.074 13:19:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.074 13:19:38 -- setup/devices.sh@53 -- # local found=0 00:05:33.074 13:19:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.074 13:19:38 -- setup/devices.sh@56 -- # : 00:05:33.074 13:19:38 -- setup/devices.sh@59 -- # local pci status 00:05:33.074 13:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.074 13:19:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.074 13:19:38 -- setup/devices.sh@47 -- # setup output config 00:05:33.074 13:19:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.074 13:19:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.332 13:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.332 13:19:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:33.332 13:19:38 -- setup/devices.sh@63 -- # found=1 00:05:33.332 13:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.332 13:19:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.332 
13:19:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.591 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.591 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.591 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.591 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.849 13:19:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.849 13:19:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:33.849 13:19:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.849 13:19:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.849 13:19:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:33.849 13:19:39 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.849 13:19:39 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:33.849 13:19:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.849 13:19:39 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:33.849 13:19:39 -- setup/devices.sh@50 -- # local mount_point= 00:05:33.849 13:19:39 -- setup/devices.sh@51 -- # local test_file= 00:05:33.849 13:19:39 -- setup/devices.sh@53 -- # local found=0 00:05:33.850 13:19:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.850 13:19:39 -- setup/devices.sh@59 -- # local pci status 00:05:33.850 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.850 13:19:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.850 13:19:39 -- setup/devices.sh@47 -- # setup output config 00:05:33.850 13:19:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.850 13:19:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.108 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.108 13:19:39 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:34.108 13:19:39 -- setup/devices.sh@63 -- # found=1 00:05:34.108 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.108 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.108 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.367 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.367 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.367 13:19:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.367 13:19:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.626 13:19:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.626 13:19:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.626 13:19:40 -- setup/devices.sh@68 -- # return 0 00:05:34.626 13:19:40 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:34.626 13:19:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.626 13:19:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.626 13:19:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.626 13:19:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.626 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:34.626 00:05:34.626 real 0m4.464s 00:05:34.626 user 0m0.999s 00:05:34.626 sys 0m1.145s 00:05:34.626 13:19:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.626 ************************************ 00:05:34.626 END TEST nvme_mount 00:05:34.626 ************************************ 00:05:34.626 13:19:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.626 13:19:40 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:34.626 13:19:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.626 13:19:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.626 13:19:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.626 ************************************ 00:05:34.626 START TEST dm_mount 00:05:34.626 ************************************ 00:05:34.626 13:19:40 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:34.626 13:19:40 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:34.626 13:19:40 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:34.626 13:19:40 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:34.626 13:19:40 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:34.626 13:19:40 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:34.626 13:19:40 -- setup/common.sh@40 -- # local part_no=2 00:05:34.626 13:19:40 -- setup/common.sh@41 -- # local size=1073741824 00:05:34.626 13:19:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:34.626 13:19:40 -- setup/common.sh@44 -- # parts=() 00:05:34.626 13:19:40 -- setup/common.sh@44 -- # local parts 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.626 13:19:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part++ )) 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.626 13:19:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part++ )) 00:05:34.626 13:19:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:34.626 13:19:40 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:34.626 13:19:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:34.626 13:19:40 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:35.561 Creating new GPT entries in memory. 00:05:35.561 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.561 other utilities. 00:05:35.561 13:19:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.561 13:19:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.561 13:19:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:35.561 13:19:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.561 13:19:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:36.937 Creating new GPT entries in memory. 00:05:36.937 The operation has completed successfully. 00:05:36.937 13:19:42 -- setup/common.sh@57 -- # (( part++ )) 00:05:36.937 13:19:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.937 13:19:42 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:36.937 13:19:42 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.937 13:19:42 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:37.873 The operation has completed successfully. 00:05:37.873 13:19:43 -- setup/common.sh@57 -- # (( part++ )) 00:05:37.873 13:19:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:37.873 13:19:43 -- setup/common.sh@62 -- # wait 66300 00:05:37.873 13:19:43 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:37.873 13:19:43 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.873 13:19:43 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:37.873 13:19:43 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:37.873 13:19:43 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:37.873 13:19:43 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:37.873 13:19:43 -- setup/devices.sh@161 -- # break 00:05:37.873 13:19:43 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:37.873 13:19:43 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:37.873 13:19:43 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:37.873 13:19:43 -- setup/devices.sh@166 -- # dm=dm-0 00:05:37.873 13:19:43 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:37.873 13:19:43 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:37.873 13:19:43 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.873 13:19:43 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:37.873 13:19:43 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.873 13:19:43 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:37.873 13:19:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:37.873 13:19:43 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.873 13:19:43 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:37.873 13:19:43 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:37.873 13:19:43 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:37.873 13:19:43 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:37.873 13:19:43 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:37.873 13:19:43 -- setup/devices.sh@53 -- # local found=0 00:05:37.873 13:19:43 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:37.873 13:19:43 -- setup/devices.sh@56 -- # : 00:05:37.873 13:19:43 -- setup/devices.sh@59 -- # local pci status 00:05:37.873 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.873 13:19:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:37.873 13:19:43 -- setup/devices.sh@47 -- # setup output config 00:05:37.873 13:19:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.873 13:19:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.873 13:19:43 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.873 13:19:43 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:37.873 13:19:43 -- setup/devices.sh@63 -- # found=1 00:05:37.873 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.873 13:19:43 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:37.873 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.440 13:19:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.440 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.440 13:19:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.440 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.440 13:19:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.440 13:19:43 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:38.440 13:19:43 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.440 13:19:43 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.440 13:19:43 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:38.440 13:19:43 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.440 13:19:43 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:38.440 13:19:43 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:38.440 13:19:43 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:38.440 13:19:43 -- setup/devices.sh@50 -- # local mount_point= 00:05:38.440 13:19:43 -- setup/devices.sh@51 -- # local test_file= 00:05:38.440 13:19:43 -- setup/devices.sh@53 -- # local found=0 00:05:38.440 13:19:43 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:38.440 13:19:43 -- setup/devices.sh@59 -- # local pci status 00:05:38.440 13:19:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.440 13:19:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:38.440 13:19:43 -- setup/devices.sh@47 -- # setup output config 00:05:38.440 13:19:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.440 13:19:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.698 13:19:44 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.698 13:19:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:38.698 13:19:44 -- setup/devices.sh@63 -- # found=1 00:05:38.698 13:19:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.698 13:19:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.698 13:19:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.956 13:19:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.956 13:19:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.956 13:19:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:38.956 13:19:44 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.956 13:19:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:38.956 13:19:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:38.956 13:19:44 -- setup/devices.sh@68 -- # return 0 00:05:38.956 13:19:44 -- setup/devices.sh@187 -- # cleanup_dm 00:05:38.956 13:19:44 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:38.956 13:19:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:38.957 13:19:44 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:38.957 13:19:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:38.957 13:19:44 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:39.214 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:39.214 13:19:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:39.214 13:19:44 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:39.214 00:05:39.214 real 0m4.537s 00:05:39.214 user 0m0.674s 00:05:39.214 sys 0m0.769s 00:05:39.215 13:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.215 13:19:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.215 ************************************ 00:05:39.215 END TEST dm_mount 00:05:39.215 ************************************ 00:05:39.215 13:19:44 -- setup/devices.sh@1 -- # cleanup 00:05:39.215 13:19:44 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:39.215 13:19:44 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:39.215 13:19:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.215 13:19:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:39.215 13:19:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.215 13:19:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:39.473 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:39.473 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:39.473 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:39.473 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:39.473 13:19:44 -- setup/devices.sh@12 -- # cleanup_dm 00:05:39.473 13:19:44 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:39.473 13:19:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:39.473 13:19:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:39.473 13:19:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:39.473 13:19:44 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:39.473 13:19:44 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:39.473 00:05:39.473 real 0m10.609s 00:05:39.473 user 0m2.389s 00:05:39.473 sys 0m2.510s 00:05:39.473 13:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.473 13:19:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.473 ************************************ 00:05:39.473 END TEST devices 00:05:39.473 ************************************ 00:05:39.473 00:05:39.473 real 0m22.107s 00:05:39.473 user 0m7.638s 00:05:39.473 sys 0m8.875s 00:05:39.473 13:19:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.473 13:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:39.473 ************************************ 00:05:39.473 END TEST setup.sh 00:05:39.473 ************************************ 00:05:39.473 13:19:45 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:39.730 Hugepages 00:05:39.730 node hugesize free / total 00:05:39.730 node0 1048576kB 0 / 0 00:05:39.730 node0 2048kB 2048 / 2048 00:05:39.730 00:05:39.730 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:39.730 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:39.730 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:39.730 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:39.987 13:19:45 -- spdk/autotest.sh@128 -- # uname -s 00:05:39.987 13:19:45 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:39.987 13:19:45 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:39.987 13:19:45 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.553 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.553 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.553 13:19:46 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:41.972 13:19:47 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:41.972 13:19:47 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:41.972 13:19:47 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:41.972 13:19:47 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:41.972 13:19:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:41.972 13:19:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:41.972 13:19:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.972 13:19:47 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.972 13:19:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:41.972 13:19:47 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:41.972 13:19:47 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:41.972 13:19:47 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.231 Waiting for block devices as requested 00:05:42.231 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.231 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.231 13:19:47 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:42.231 13:19:47 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:42.231 13:19:47 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.231 13:19:47 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:42.231 13:19:47 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:42.231 13:19:47 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:42.231 13:19:47 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:42.231 13:19:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:42.232 13:19:47 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:42.232 13:19:47 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:42.232 13:19:47 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:42.232 13:19:47 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:42.232 13:19:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.232 13:19:47 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:42.232 13:19:47 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:42.232 13:19:47 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:42.232 13:19:47 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:42.232 13:19:47 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:42.232 13:19:47 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:42.232 13:19:47 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:42.232 13:19:47 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:42.232 13:19:47 -- common/autotest_common.sh@1552 -- # continue 00:05:42.232 13:19:47 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:42.232 13:19:47 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:42.232 13:19:47 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:42.232 13:19:47 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:42.232 13:19:47 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:42.232 13:19:47 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:42.232 13:19:47 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:42.490 13:19:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:42.490 13:19:47 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:42.490 13:19:47 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:42.490 13:19:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:42.490 13:19:47 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:42.490 13:19:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:42.490 13:19:47 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:42.490 13:19:47 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:42.490 13:19:47 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:42.490 13:19:47 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:42.490 13:19:47 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:42.490 13:19:47 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:42.490 13:19:47 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:42.490 13:19:47 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:42.490 13:19:47 -- common/autotest_common.sh@1552 -- # continue 00:05:42.490 13:19:47 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:42.490 13:19:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.490 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.490 13:19:47 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:42.490 13:19:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.490 13:19:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.490 13:19:47 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.058 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.317 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:43.317 13:19:48 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:43.317 13:19:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.317 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.317 13:19:48 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:43.317 13:19:48 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:43.317 13:19:48 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:43.317 13:19:48 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:43.317 13:19:48 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:43.317 13:19:48 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:43.317 13:19:48 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:43.317 13:19:48 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:43.317 13:19:48 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.317 13:19:48 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:43.317 13:19:48 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:43.317 13:19:48 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:43.317 13:19:48 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:43.317 13:19:48 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:43.317 13:19:48 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:43.317 13:19:48 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:43.317 13:19:48 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.317 13:19:48 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:43.317 13:19:48 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:43.317 13:19:48 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:43.317 13:19:48 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:43.317 13:19:48 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:43.317 13:19:48 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:43.317 13:19:48 -- common/autotest_common.sh@1588 -- # return 0 00:05:43.317 13:19:48 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:43.317 13:19:48 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:43.317 13:19:48 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:43.317 13:19:48 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:43.317 13:19:48 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:43.317 13:19:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.317 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.317 13:19:48 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.317 13:19:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.317 13:19:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.317 13:19:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.317 ************************************ 00:05:43.317 START TEST env 00:05:43.317 ************************************ 00:05:43.317 13:19:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:43.576 * Looking for test storage... 
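The get_nvme_bdfs helper traced above reduces to generating an NVMe config and pulling the PCI addresses out with jq; a minimal stand-alone sketch, assuming the repo layout used in this run:

  # enumerate NVMe BDFs the same way autotest_common.sh does (sketch, not the harness itself)
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"    # on this VM: 0000:00:06.0 and 0000:00:07.0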
00:05:43.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:43.576 13:19:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:43.576 13:19:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:43.576 13:19:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.576 13:19:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.576 13:19:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.576 13:19:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.576 13:19:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.576 13:19:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.576 13:19:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.576 13:19:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.576 13:19:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.576 13:19:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.576 13:19:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.576 13:19:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.576 13:19:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.577 13:19:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.577 13:19:49 -- scripts/common.sh@344 -- # : 1 00:05:43.577 13:19:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.577 13:19:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.577 13:19:49 -- scripts/common.sh@364 -- # decimal 1 00:05:43.577 13:19:49 -- scripts/common.sh@352 -- # local d=1 00:05:43.577 13:19:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.577 13:19:49 -- scripts/common.sh@354 -- # echo 1 00:05:43.577 13:19:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.577 13:19:49 -- scripts/common.sh@365 -- # decimal 2 00:05:43.577 13:19:49 -- scripts/common.sh@352 -- # local d=2 00:05:43.577 13:19:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.577 13:19:49 -- scripts/common.sh@354 -- # echo 2 00:05:43.577 13:19:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.577 13:19:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.577 13:19:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.577 13:19:49 -- scripts/common.sh@367 -- # return 0 00:05:43.577 13:19:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.577 13:19:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.577 --rc genhtml_branch_coverage=1 00:05:43.577 --rc genhtml_function_coverage=1 00:05:43.577 --rc genhtml_legend=1 00:05:43.577 --rc geninfo_all_blocks=1 00:05:43.577 --rc geninfo_unexecuted_blocks=1 00:05:43.577 00:05:43.577 ' 00:05:43.577 13:19:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.577 --rc genhtml_branch_coverage=1 00:05:43.577 --rc genhtml_function_coverage=1 00:05:43.577 --rc genhtml_legend=1 00:05:43.577 --rc geninfo_all_blocks=1 00:05:43.577 --rc geninfo_unexecuted_blocks=1 00:05:43.577 00:05:43.577 ' 00:05:43.577 13:19:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.577 --rc genhtml_branch_coverage=1 00:05:43.577 --rc genhtml_function_coverage=1 00:05:43.577 --rc genhtml_legend=1 00:05:43.577 --rc geninfo_all_blocks=1 00:05:43.577 --rc geninfo_unexecuted_blocks=1 00:05:43.577 00:05:43.577 ' 00:05:43.577 13:19:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.577 --rc genhtml_branch_coverage=1 00:05:43.577 --rc genhtml_function_coverage=1 00:05:43.577 --rc genhtml_legend=1 00:05:43.577 --rc geninfo_all_blocks=1 00:05:43.577 --rc geninfo_unexecuted_blocks=1 00:05:43.577 00:05:43.577 ' 00:05:43.577 13:19:49 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.577 13:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.577 13:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.577 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.577 ************************************ 00:05:43.577 START TEST env_memory 00:05:43.577 ************************************ 00:05:43.577 13:19:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:43.577 00:05:43.577 00:05:43.577 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.577 http://cunit.sourceforge.net/ 00:05:43.577 00:05:43.577 00:05:43.577 Suite: memory 00:05:43.577 Test: alloc and free memory map ...[2024-12-15 13:19:49.179363] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:43.577 passed 00:05:43.577 Test: mem map translation ...[2024-12-15 13:19:49.199861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:43.577 [2024-12-15 13:19:49.200025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:43.577 [2024-12-15 13:19:49.200209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:43.577 [2024-12-15 13:19:49.200381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.577 passed 00:05:43.577 Test: mem map registration ...[2024-12-15 13:19:49.241926] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:43.577 [2024-12-15 13:19:49.242089] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:43.577 passed 00:05:43.836 Test: mem map adjacent registrations ...passed 00:05:43.836 00:05:43.836 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.836 suites 1 1 n/a 0 0 00:05:43.836 tests 4 4 4 0 0 00:05:43.836 asserts 152 152 152 0 n/a 00:05:43.836 00:05:43.836 Elapsed time = 0.136 seconds 00:05:43.836 00:05:43.836 real 0m0.158s 00:05:43.836 user 0m0.140s 00:05:43.836 sys 0m0.013s 00:05:43.836 13:19:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.836 13:19:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 END TEST env_memory 00:05:43.836 ************************************ 00:05:43.836 13:19:49 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.836 13:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.836 13:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.836 13:19:49 -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.836 ************************************ 00:05:43.836 START TEST env_vtophys 00:05:43.836 ************************************ 00:05:43.836 13:19:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.836 EAL: lib.eal log level changed from notice to debug 00:05:43.836 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.836 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.836 EAL: Maximum logical cores by configuration: 128 00:05:43.836 EAL: Detected CPU lcores: 10 00:05:43.836 EAL: Detected NUMA nodes: 1 00:05:43.836 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:43.836 EAL: Detected shared linkage of DPDK 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:43.836 EAL: Registered [vdev] bus. 00:05:43.836 EAL: bus.vdev log level changed from disabled to notice 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:43.836 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:43.836 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:43.836 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:43.836 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.836 EAL: No shared files mode enabled, IPC is disabled 00:05:43.836 EAL: Selected IOVA mode 'PA' 00:05:43.836 EAL: Probing VFIO support... 00:05:43.836 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.836 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.836 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.836 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.836 EAL: Setting up physically contiguous memory... 
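The "VFIO modules not loaded" messages above come from EAL probing /sys/module before it falls back to uio_pci_generic; a quick manual check along the same lines (sketch):

  # confirm why EAL skipped VFIO on this VM
  for m in vfio vfio_pci; do
    [ -d "/sys/module/$m" ] && echo "$m: loaded" || echo "$m: not loaded"
  done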
00:05:43.836 EAL: Setting maximum number of open files to 524288 00:05:43.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.836 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.836 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.837 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.837 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.837 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.837 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.837 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.837 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.837 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.837 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.837 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.837 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.837 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.837 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:43.837 EAL: Hugepages will be freed exactly as allocated. 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: TSC frequency is ~2200000 KHz 00:05:43.837 EAL: Main lcore 0 is ready (tid=7fba78451a00;cpuset=[0]) 00:05:43.837 EAL: Trying to obtain current memory policy. 00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.837 EAL: Restoring previous memory policy: 0 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.837 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.837 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.837 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:43.837 00:05:43.837 00:05:43.837 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.837 http://cunit.sourceforge.net/ 00:05:43.837 00:05:43.837 00:05:43.837 Suite: components_suite 00:05:43.837 Test: vtophys_malloc_test ...passed 00:05:43.837 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
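Each "Heap on socket 0 was expanded/shrunk by N MB" pair in the malloc tests below is backed by the 2048 kB hugepage pool reported by setup.sh status earlier; a simple way to watch that pool while the test runs (sketch, run from another shell):

  # free vs. total hugepages backing the EAL heap growth
  grep -E 'HugePages_(Total|Free)' /proc/meminfo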
00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.837 EAL: Restoring previous memory policy: 4 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was expanded by 4MB 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was shrunk by 4MB 00:05:43.837 EAL: Trying to obtain current memory policy. 00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.837 EAL: Restoring previous memory policy: 4 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was expanded by 6MB 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was shrunk by 6MB 00:05:43.837 EAL: Trying to obtain current memory policy. 00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.837 EAL: Restoring previous memory policy: 4 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was expanded by 10MB 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was shrunk by 10MB 00:05:43.837 EAL: Trying to obtain current memory policy. 00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.837 EAL: Restoring previous memory policy: 4 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was expanded by 18MB 00:05:43.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.837 EAL: request: mp_malloc_sync 00:05:43.837 EAL: No shared files mode enabled, IPC is disabled 00:05:43.837 EAL: Heap on socket 0 was shrunk by 18MB 00:05:43.837 EAL: Trying to obtain current memory policy. 00:05:43.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.096 EAL: Restoring previous memory policy: 4 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was expanded by 34MB 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was shrunk by 34MB 00:05:44.096 EAL: Trying to obtain current memory policy. 
00:05:44.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.096 EAL: Restoring previous memory policy: 4 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was expanded by 66MB 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was shrunk by 66MB 00:05:44.096 EAL: Trying to obtain current memory policy. 00:05:44.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.096 EAL: Restoring previous memory policy: 4 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was expanded by 130MB 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was shrunk by 130MB 00:05:44.096 EAL: Trying to obtain current memory policy. 00:05:44.096 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.096 EAL: Restoring previous memory policy: 4 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.096 EAL: request: mp_malloc_sync 00:05:44.096 EAL: No shared files mode enabled, IPC is disabled 00:05:44.096 EAL: Heap on socket 0 was expanded by 258MB 00:05:44.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.355 EAL: request: mp_malloc_sync 00:05:44.355 EAL: No shared files mode enabled, IPC is disabled 00:05:44.355 EAL: Heap on socket 0 was shrunk by 258MB 00:05:44.355 EAL: Trying to obtain current memory policy. 00:05:44.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.355 EAL: Restoring previous memory policy: 4 00:05:44.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.355 EAL: request: mp_malloc_sync 00:05:44.355 EAL: No shared files mode enabled, IPC is disabled 00:05:44.355 EAL: Heap on socket 0 was expanded by 514MB 00:05:44.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.614 EAL: request: mp_malloc_sync 00:05:44.614 EAL: No shared files mode enabled, IPC is disabled 00:05:44.614 EAL: Heap on socket 0 was shrunk by 514MB 00:05:44.614 EAL: Trying to obtain current memory policy. 
00:05:44.614 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.873 EAL: Restoring previous memory policy: 4 00:05:44.873 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.873 EAL: request: mp_malloc_sync 00:05:44.873 EAL: No shared files mode enabled, IPC is disabled 00:05:44.873 EAL: Heap on socket 0 was expanded by 1026MB 00:05:44.873 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.132 passed 00:05:45.132 00:05:45.132 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.132 suites 1 1 n/a 0 0 00:05:45.132 tests 2 2 2 0 0 00:05:45.132 asserts 5379 5379 5379 0 n/a 00:05:45.132 00:05:45.132 Elapsed time = 1.199 seconds 00:05:45.132 EAL: request: mp_malloc_sync 00:05:45.132 EAL: No shared files mode enabled, IPC is disabled 00:05:45.132 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:45.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.132 EAL: request: mp_malloc_sync 00:05:45.132 EAL: No shared files mode enabled, IPC is disabled 00:05:45.132 EAL: Heap on socket 0 was shrunk by 2MB 00:05:45.132 EAL: No shared files mode enabled, IPC is disabled 00:05:45.132 EAL: No shared files mode enabled, IPC is disabled 00:05:45.132 EAL: No shared files mode enabled, IPC is disabled 00:05:45.132 00:05:45.132 real 0m1.406s 00:05:45.132 user 0m0.769s 00:05:45.132 sys 0m0.492s 00:05:45.132 13:19:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.132 13:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.132 ************************************ 00:05:45.132 END TEST env_vtophys 00:05:45.132 ************************************ 00:05:45.132 13:19:50 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.132 13:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.132 13:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.132 13:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.132 ************************************ 00:05:45.132 START TEST env_pci 00:05:45.132 ************************************ 00:05:45.132 13:19:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:45.391 00:05:45.391 00:05:45.391 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.391 http://cunit.sourceforge.net/ 00:05:45.391 00:05:45.391 00:05:45.391 Suite: pci 00:05:45.391 Test: pci_hook ...[2024-12-15 13:19:50.822616] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67438 has claimed it 00:05:45.391 passed 00:05:45.391 00:05:45.391 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.391 suites 1 1 n/a 0 0 00:05:45.391 tests 1 1 1 0 0 00:05:45.391 asserts 25 25 25 0 n/a 00:05:45.391 00:05:45.391 Elapsed time = 0.002 seconds 00:05:45.391 EAL: Cannot find device (10000:00:01.0) 00:05:45.391 EAL: Failed to attach device on primary process 00:05:45.391 ************************************ 00:05:45.391 END TEST env_pci 00:05:45.391 ************************************ 00:05:45.391 00:05:45.391 real 0m0.022s 00:05:45.391 user 0m0.012s 00:05:45.391 sys 0m0.009s 00:05:45.391 13:19:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.391 13:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.391 13:19:50 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:45.391 13:19:50 -- env/env.sh@15 -- # uname 00:05:45.391 13:19:50 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:45.391 13:19:50 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:45.391 13:19:50 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.391 13:19:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:45.391 13:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.391 13:19:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.391 ************************************ 00:05:45.391 START TEST env_dpdk_post_init 00:05:45.391 ************************************ 00:05:45.391 13:19:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:45.391 EAL: Detected CPU lcores: 10 00:05:45.391 EAL: Detected NUMA nodes: 1 00:05:45.391 EAL: Detected shared linkage of DPDK 00:05:45.391 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.391 EAL: Selected IOVA mode 'PA' 00:05:45.391 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.391 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:45.391 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:45.391 Starting DPDK initialization... 00:05:45.391 Starting SPDK post initialization... 00:05:45.391 SPDK NVMe probe 00:05:45.391 Attaching to 0000:00:06.0 00:05:45.391 Attaching to 0000:00:07.0 00:05:45.391 Attached to 0000:00:06.0 00:05:45.391 Attached to 0000:00:07.0 00:05:45.391 Cleaning up... 00:05:45.391 ************************************ 00:05:45.391 END TEST env_dpdk_post_init 00:05:45.391 ************************************ 00:05:45.391 00:05:45.391 real 0m0.173s 00:05:45.391 user 0m0.041s 00:05:45.391 sys 0m0.032s 00:05:45.391 13:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.391 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.650 13:19:51 -- env/env.sh@26 -- # uname 00:05:45.650 13:19:51 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.650 13:19:51 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.650 13:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.650 13:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.650 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.650 ************************************ 00:05:45.650 START TEST env_mem_callbacks 00:05:45.650 ************************************ 00:05:45.650 13:19:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.650 EAL: Detected CPU lcores: 10 00:05:45.650 EAL: Detected NUMA nodes: 1 00:05:45.650 EAL: Detected shared linkage of DPDK 00:05:45.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.650 EAL: Selected IOVA mode 'PA' 00:05:45.651 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.651 00:05:45.651 00:05:45.651 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.651 http://cunit.sourceforge.net/ 00:05:45.651 00:05:45.651 00:05:45.651 Suite: memory 00:05:45.651 Test: test ... 
00:05:45.651 register 0x200000200000 2097152 00:05:45.651 malloc 3145728 00:05:45.651 register 0x200000400000 4194304 00:05:45.651 buf 0x200000500000 len 3145728 PASSED 00:05:45.651 malloc 64 00:05:45.651 buf 0x2000004fff40 len 64 PASSED 00:05:45.651 malloc 4194304 00:05:45.651 register 0x200000800000 6291456 00:05:45.651 buf 0x200000a00000 len 4194304 PASSED 00:05:45.651 free 0x200000500000 3145728 00:05:45.651 free 0x2000004fff40 64 00:05:45.651 unregister 0x200000400000 4194304 PASSED 00:05:45.651 free 0x200000a00000 4194304 00:05:45.651 unregister 0x200000800000 6291456 PASSED 00:05:45.651 malloc 8388608 00:05:45.651 register 0x200000400000 10485760 00:05:45.651 buf 0x200000600000 len 8388608 PASSED 00:05:45.651 free 0x200000600000 8388608 00:05:45.651 unregister 0x200000400000 10485760 PASSED 00:05:45.651 passed 00:05:45.651 00:05:45.651 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.651 suites 1 1 n/a 0 0 00:05:45.651 tests 1 1 1 0 0 00:05:45.651 asserts 15 15 15 0 n/a 00:05:45.651 00:05:45.651 Elapsed time = 0.008 seconds 00:05:45.651 00:05:45.651 real 0m0.145s 00:05:45.651 user 0m0.017s 00:05:45.651 sys 0m0.024s 00:05:45.651 ************************************ 00:05:45.651 END TEST env_mem_callbacks 00:05:45.651 ************************************ 00:05:45.651 13:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.651 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.651 ************************************ 00:05:45.651 END TEST env 00:05:45.651 ************************************ 00:05:45.651 00:05:45.651 real 0m2.347s 00:05:45.651 user 0m1.163s 00:05:45.651 sys 0m0.810s 00:05:45.651 13:19:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.651 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.651 13:19:51 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:45.651 13:19:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.651 13:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.651 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 ************************************ 00:05:45.910 START TEST rpc 00:05:45.910 ************************************ 00:05:45.910 13:19:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:45.910 * Looking for test storage... 
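The test-storage lookup that follows is the rpc suite's standard preamble; the lcov probe traced next just extracts the version number before choosing which LCOV_OPTS to export. The same probe, stand-alone (sketch):

  # what the '--version | awk' pipeline below resolves to
  lcov --version | awk '{print $NF}'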
00:05:45.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.910 13:19:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:45.910 13:19:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:45.910 13:19:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:45.910 13:19:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:45.910 13:19:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:45.910 13:19:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:45.910 13:19:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:45.910 13:19:51 -- scripts/common.sh@335 -- # IFS=.-: 00:05:45.910 13:19:51 -- scripts/common.sh@335 -- # read -ra ver1 00:05:45.910 13:19:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.910 13:19:51 -- scripts/common.sh@336 -- # read -ra ver2 00:05:45.910 13:19:51 -- scripts/common.sh@337 -- # local 'op=<' 00:05:45.910 13:19:51 -- scripts/common.sh@339 -- # ver1_l=2 00:05:45.910 13:19:51 -- scripts/common.sh@340 -- # ver2_l=1 00:05:45.910 13:19:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:45.910 13:19:51 -- scripts/common.sh@343 -- # case "$op" in 00:05:45.910 13:19:51 -- scripts/common.sh@344 -- # : 1 00:05:45.910 13:19:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:45.910 13:19:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.910 13:19:51 -- scripts/common.sh@364 -- # decimal 1 00:05:45.910 13:19:51 -- scripts/common.sh@352 -- # local d=1 00:05:45.910 13:19:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.910 13:19:51 -- scripts/common.sh@354 -- # echo 1 00:05:45.910 13:19:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:45.910 13:19:51 -- scripts/common.sh@365 -- # decimal 2 00:05:45.910 13:19:51 -- scripts/common.sh@352 -- # local d=2 00:05:45.910 13:19:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.910 13:19:51 -- scripts/common.sh@354 -- # echo 2 00:05:45.910 13:19:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:45.910 13:19:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:45.910 13:19:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:45.910 13:19:51 -- scripts/common.sh@367 -- # return 0 00:05:45.910 13:19:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.910 13:19:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:45.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.910 --rc genhtml_branch_coverage=1 00:05:45.910 --rc genhtml_function_coverage=1 00:05:45.910 --rc genhtml_legend=1 00:05:45.910 --rc geninfo_all_blocks=1 00:05:45.910 --rc geninfo_unexecuted_blocks=1 00:05:45.910 00:05:45.910 ' 00:05:45.910 13:19:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:45.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.910 --rc genhtml_branch_coverage=1 00:05:45.910 --rc genhtml_function_coverage=1 00:05:45.910 --rc genhtml_legend=1 00:05:45.910 --rc geninfo_all_blocks=1 00:05:45.910 --rc geninfo_unexecuted_blocks=1 00:05:45.910 00:05:45.910 ' 00:05:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
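The rpc suite below launches spdk_tgt with the bdev tracepoint group and then issues RPCs over /var/tmp/spdk.sock; a minimal manual equivalent of what run_test/rpc_cmd do here (sketch, paths as used in this run, assuming rpc_cmd wraps scripts/rpc.py):

  # start the target, wait for its RPC socket, then make the same query rpc_cmd issues
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs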
00:05:45.910 13:19:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:45.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.910 --rc genhtml_branch_coverage=1 00:05:45.910 --rc genhtml_function_coverage=1 00:05:45.910 --rc genhtml_legend=1 00:05:45.910 --rc geninfo_all_blocks=1 00:05:45.910 --rc geninfo_unexecuted_blocks=1 00:05:45.910 00:05:45.910 ' 00:05:45.910 13:19:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:45.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.910 --rc genhtml_branch_coverage=1 00:05:45.910 --rc genhtml_function_coverage=1 00:05:45.910 --rc genhtml_legend=1 00:05:45.910 --rc geninfo_all_blocks=1 00:05:45.910 --rc geninfo_unexecuted_blocks=1 00:05:45.910 00:05:45.910 ' 00:05:45.910 13:19:51 -- rpc/rpc.sh@65 -- # spdk_pid=67555 00:05:45.910 13:19:51 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.910 13:19:51 -- rpc/rpc.sh@67 -- # waitforlisten 67555 00:05:45.910 13:19:51 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:45.910 13:19:51 -- common/autotest_common.sh@829 -- # '[' -z 67555 ']' 00:05:45.910 13:19:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.910 13:19:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.910 13:19:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.910 13:19:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.910 13:19:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.910 [2024-12-15 13:19:51.591978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.910 [2024-12-15 13:19:51.592269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67555 ] 00:05:46.169 [2024-12-15 13:19:51.735246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.169 [2024-12-15 13:19:51.797878] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.169 [2024-12-15 13:19:51.798306] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:46.169 [2024-12-15 13:19:51.798448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67555' to capture a snapshot of events at runtime. 00:05:46.169 [2024-12-15 13:19:51.798608] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67555 for offline analysis/debug. 
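The app_setup_trace notice above prints the exact command for grabbing a tracepoint snapshot while the target is alive; it can be run as-is from another shell (sketch, pid 67555 is specific to this run):

  # capture the bdev tracepoint group enabled via '-e bdev'
  spdk_trace -s spdk_tgt -p 67555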
00:05:46.169 [2024-12-15 13:19:51.798912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.105 13:19:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.105 13:19:52 -- common/autotest_common.sh@862 -- # return 0 00:05:47.105 13:19:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.105 13:19:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.105 13:19:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:47.105 13:19:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:47.105 13:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.105 13:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.105 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.105 ************************************ 00:05:47.105 START TEST rpc_integrity 00:05:47.105 ************************************ 00:05:47.105 13:19:52 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:47.105 13:19:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.105 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.105 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.105 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.105 13:19:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.105 13:19:52 -- rpc/rpc.sh@13 -- # jq length 00:05:47.105 13:19:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.105 13:19:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.105 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.105 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.105 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.105 13:19:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:47.105 13:19:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.105 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.105 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.105 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.105 13:19:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.105 { 00:05:47.105 "aliases": [ 00:05:47.105 "1731724a-867d-4907-847b-44381cf537a7" 00:05:47.105 ], 00:05:47.105 "assigned_rate_limits": { 00:05:47.105 "r_mbytes_per_sec": 0, 00:05:47.105 "rw_ios_per_sec": 0, 00:05:47.105 "rw_mbytes_per_sec": 0, 00:05:47.105 "w_mbytes_per_sec": 0 00:05:47.105 }, 00:05:47.105 "block_size": 512, 00:05:47.105 "claimed": false, 00:05:47.105 "driver_specific": {}, 00:05:47.105 "memory_domains": [ 00:05:47.105 { 00:05:47.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.105 "dma_device_type": 2 00:05:47.105 } 00:05:47.105 ], 00:05:47.105 "name": "Malloc0", 00:05:47.105 "num_blocks": 16384, 00:05:47.105 "product_name": "Malloc disk", 00:05:47.105 "supported_io_types": { 00:05:47.105 "abort": true, 00:05:47.105 "compare": false, 00:05:47.105 "compare_and_write": false, 00:05:47.105 "flush": true, 00:05:47.105 "nvme_admin": false, 00:05:47.105 "nvme_io": false, 00:05:47.105 "read": true, 00:05:47.105 "reset": true, 00:05:47.105 "unmap": true, 00:05:47.105 "write": true, 00:05:47.105 "write_zeroes": true 00:05:47.105 }, 
00:05:47.105 "uuid": "1731724a-867d-4907-847b-44381cf537a7", 00:05:47.105 "zoned": false 00:05:47.105 } 00:05:47.105 ]' 00:05:47.105 13:19:52 -- rpc/rpc.sh@17 -- # jq length 00:05:47.364 13:19:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.365 13:19:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:47.365 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 [2024-12-15 13:19:52.803459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:47.365 [2024-12-15 13:19:52.803513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.365 [2024-12-15 13:19:52.803529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x169bb60 00:05:47.365 [2024-12-15 13:19:52.803536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.365 [2024-12-15 13:19:52.805003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.365 [2024-12-15 13:19:52.805174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.365 Passthru0 00:05:47.365 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.365 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.365 { 00:05:47.365 "aliases": [ 00:05:47.365 "1731724a-867d-4907-847b-44381cf537a7" 00:05:47.365 ], 00:05:47.365 "assigned_rate_limits": { 00:05:47.365 "r_mbytes_per_sec": 0, 00:05:47.365 "rw_ios_per_sec": 0, 00:05:47.365 "rw_mbytes_per_sec": 0, 00:05:47.365 "w_mbytes_per_sec": 0 00:05:47.365 }, 00:05:47.365 "block_size": 512, 00:05:47.365 "claim_type": "exclusive_write", 00:05:47.365 "claimed": true, 00:05:47.365 "driver_specific": {}, 00:05:47.365 "memory_domains": [ 00:05:47.365 { 00:05:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.365 "dma_device_type": 2 00:05:47.365 } 00:05:47.365 ], 00:05:47.365 "name": "Malloc0", 00:05:47.365 "num_blocks": 16384, 00:05:47.365 "product_name": "Malloc disk", 00:05:47.365 "supported_io_types": { 00:05:47.365 "abort": true, 00:05:47.365 "compare": false, 00:05:47.365 "compare_and_write": false, 00:05:47.365 "flush": true, 00:05:47.365 "nvme_admin": false, 00:05:47.365 "nvme_io": false, 00:05:47.365 "read": true, 00:05:47.365 "reset": true, 00:05:47.365 "unmap": true, 00:05:47.365 "write": true, 00:05:47.365 "write_zeroes": true 00:05:47.365 }, 00:05:47.365 "uuid": "1731724a-867d-4907-847b-44381cf537a7", 00:05:47.365 "zoned": false 00:05:47.365 }, 00:05:47.365 { 00:05:47.365 "aliases": [ 00:05:47.365 "ae04d5d7-e0e7-55b4-913d-b28a63a4e388" 00:05:47.365 ], 00:05:47.365 "assigned_rate_limits": { 00:05:47.365 "r_mbytes_per_sec": 0, 00:05:47.365 "rw_ios_per_sec": 0, 00:05:47.365 "rw_mbytes_per_sec": 0, 00:05:47.365 "w_mbytes_per_sec": 0 00:05:47.365 }, 00:05:47.365 "block_size": 512, 00:05:47.365 "claimed": false, 00:05:47.365 "driver_specific": { 00:05:47.365 "passthru": { 00:05:47.365 "base_bdev_name": "Malloc0", 00:05:47.365 "name": "Passthru0" 00:05:47.365 } 00:05:47.365 }, 00:05:47.365 "memory_domains": [ 00:05:47.365 { 00:05:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.365 "dma_device_type": 2 00:05:47.365 } 00:05:47.365 ], 
00:05:47.365 "name": "Passthru0", 00:05:47.365 "num_blocks": 16384, 00:05:47.365 "product_name": "passthru", 00:05:47.365 "supported_io_types": { 00:05:47.365 "abort": true, 00:05:47.365 "compare": false, 00:05:47.365 "compare_and_write": false, 00:05:47.365 "flush": true, 00:05:47.365 "nvme_admin": false, 00:05:47.365 "nvme_io": false, 00:05:47.365 "read": true, 00:05:47.365 "reset": true, 00:05:47.365 "unmap": true, 00:05:47.365 "write": true, 00:05:47.365 "write_zeroes": true 00:05:47.365 }, 00:05:47.365 "uuid": "ae04d5d7-e0e7-55b4-913d-b28a63a4e388", 00:05:47.365 "zoned": false 00:05:47.365 } 00:05:47.365 ]' 00:05:47.365 13:19:52 -- rpc/rpc.sh@21 -- # jq length 00:05:47.365 13:19:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.365 13:19:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.365 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:47.365 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.365 13:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.365 13:19:52 -- rpc/rpc.sh@26 -- # jq length 00:05:47.365 ************************************ 00:05:47.365 END TEST rpc_integrity 00:05:47.365 ************************************ 00:05:47.365 13:19:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.365 00:05:47.365 real 0m0.315s 00:05:47.365 user 0m0.210s 00:05:47.365 sys 0m0.029s 00:05:47.365 13:19:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:47.365 13:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.365 13:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.365 13:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 ************************************ 00:05:47.365 START TEST rpc_plugins 00:05:47.365 ************************************ 00:05:47.365 13:19:53 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:47.365 13:19:53 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:47.365 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:53 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:47.365 13:19:53 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:47.365 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.365 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.365 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.365 13:19:53 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:47.365 { 00:05:47.365 "aliases": [ 00:05:47.365 "a75485bd-ccb6-4161-86be-b484cb4761e0" 00:05:47.365 ], 00:05:47.365 "assigned_rate_limits": { 00:05:47.365 "r_mbytes_per_sec": 0, 00:05:47.365 
"rw_ios_per_sec": 0, 00:05:47.365 "rw_mbytes_per_sec": 0, 00:05:47.365 "w_mbytes_per_sec": 0 00:05:47.365 }, 00:05:47.365 "block_size": 4096, 00:05:47.365 "claimed": false, 00:05:47.365 "driver_specific": {}, 00:05:47.365 "memory_domains": [ 00:05:47.365 { 00:05:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.365 "dma_device_type": 2 00:05:47.365 } 00:05:47.365 ], 00:05:47.365 "name": "Malloc1", 00:05:47.365 "num_blocks": 256, 00:05:47.365 "product_name": "Malloc disk", 00:05:47.365 "supported_io_types": { 00:05:47.365 "abort": true, 00:05:47.365 "compare": false, 00:05:47.365 "compare_and_write": false, 00:05:47.365 "flush": true, 00:05:47.365 "nvme_admin": false, 00:05:47.365 "nvme_io": false, 00:05:47.365 "read": true, 00:05:47.365 "reset": true, 00:05:47.365 "unmap": true, 00:05:47.365 "write": true, 00:05:47.365 "write_zeroes": true 00:05:47.365 }, 00:05:47.365 "uuid": "a75485bd-ccb6-4161-86be-b484cb4761e0", 00:05:47.365 "zoned": false 00:05:47.365 } 00:05:47.365 ]' 00:05:47.365 13:19:53 -- rpc/rpc.sh@32 -- # jq length 00:05:47.624 13:19:53 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:47.624 13:19:53 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:47.624 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.624 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.624 13:19:53 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:47.624 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.624 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.624 13:19:53 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:47.624 13:19:53 -- rpc/rpc.sh@36 -- # jq length 00:05:47.624 13:19:53 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:47.624 00:05:47.624 real 0m0.165s 00:05:47.624 user 0m0.112s 00:05:47.624 sys 0m0.011s 00:05:47.624 13:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.624 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 ************************************ 00:05:47.624 END TEST rpc_plugins 00:05:47.624 ************************************ 00:05:47.624 13:19:53 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:47.624 13:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.624 13:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.624 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 ************************************ 00:05:47.624 START TEST rpc_trace_cmd_test 00:05:47.624 ************************************ 00:05:47.624 13:19:53 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:47.624 13:19:53 -- rpc/rpc.sh@40 -- # local info 00:05:47.624 13:19:53 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:47.624 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.624 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.624 13:19:53 -- rpc/rpc.sh@42 -- # info='{ 00:05:47.624 "bdev": { 00:05:47.624 "mask": "0x8", 00:05:47.624 "tpoint_mask": "0xffffffffffffffff" 00:05:47.624 }, 00:05:47.624 "bdev_nvme": { 00:05:47.624 "mask": "0x4000", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "blobfs": { 00:05:47.624 "mask": "0x80", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "dsa": { 00:05:47.624 "mask": "0x200", 00:05:47.624 
"tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "ftl": { 00:05:47.624 "mask": "0x40", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "iaa": { 00:05:47.624 "mask": "0x1000", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "iscsi_conn": { 00:05:47.624 "mask": "0x2", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "nvme_pcie": { 00:05:47.624 "mask": "0x800", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "nvme_tcp": { 00:05:47.624 "mask": "0x2000", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "nvmf_rdma": { 00:05:47.624 "mask": "0x10", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "nvmf_tcp": { 00:05:47.624 "mask": "0x20", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "scsi": { 00:05:47.624 "mask": "0x4", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "thread": { 00:05:47.624 "mask": "0x400", 00:05:47.624 "tpoint_mask": "0x0" 00:05:47.624 }, 00:05:47.624 "tpoint_group_mask": "0x8", 00:05:47.624 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67555" 00:05:47.624 }' 00:05:47.624 13:19:53 -- rpc/rpc.sh@43 -- # jq length 00:05:47.624 13:19:53 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:47.624 13:19:53 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.883 13:19:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.883 13:19:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.883 13:19:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.883 13:19:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.883 13:19:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.883 13:19:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.883 13:19:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.883 00:05:47.883 real 0m0.275s 00:05:47.883 user 0m0.235s 00:05:47.883 sys 0m0.029s 00:05:47.883 13:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.883 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.883 ************************************ 00:05:47.883 END TEST rpc_trace_cmd_test 00:05:47.883 ************************************ 00:05:47.883 13:19:53 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:47.883 13:19:53 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:47.883 13:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.883 13:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.883 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.883 ************************************ 00:05:47.883 START TEST go_rpc 00:05:47.883 ************************************ 00:05:47.883 13:19:53 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:47.883 13:19:53 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:47.883 13:19:53 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:47.883 13:19:53 -- rpc/rpc.sh@52 -- # jq length 00:05:48.142 13:19:53 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:48.142 13:19:53 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.142 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.142 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.142 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.142 13:19:53 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:48.142 13:19:53 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:48.142 13:19:53 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["2ad5f544-ad0f-4e37-b75d-a1d38764492e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"2ad5f544-ad0f-4e37-b75d-a1d38764492e","zoned":false}]' 00:05:48.142 13:19:53 -- rpc/rpc.sh@57 -- # jq length 00:05:48.142 13:19:53 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:48.142 13:19:53 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:48.142 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.142 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.142 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.142 13:19:53 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:48.142 13:19:53 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:48.142 13:19:53 -- rpc/rpc.sh@61 -- # jq length 00:05:48.142 13:19:53 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:48.142 00:05:48.142 real 0m0.244s 00:05:48.142 user 0m0.177s 00:05:48.142 sys 0m0.031s 00:05:48.142 13:19:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.142 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.142 ************************************ 00:05:48.142 END TEST go_rpc 00:05:48.142 ************************************ 00:05:48.401 13:19:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:48.401 13:19:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:48.401 13:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.401 13:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 ************************************ 00:05:48.401 START TEST rpc_daemon_integrity 00:05:48.401 ************************************ 00:05:48.401 13:19:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:48.401 13:19:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:48.401 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.401 13:19:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:48.401 13:19:53 -- rpc/rpc.sh@13 -- # jq length 00:05:48.401 13:19:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:48.401 13:19:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:48.401 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.401 13:19:53 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:48.401 13:19:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:48.401 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.401 13:19:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:48.401 { 00:05:48.401 "aliases": [ 00:05:48.401 "28d95294-112c-41b6-9685-47117346847a" 00:05:48.401 ], 00:05:48.401 "assigned_rate_limits": { 00:05:48.401 
"r_mbytes_per_sec": 0, 00:05:48.401 "rw_ios_per_sec": 0, 00:05:48.401 "rw_mbytes_per_sec": 0, 00:05:48.401 "w_mbytes_per_sec": 0 00:05:48.401 }, 00:05:48.401 "block_size": 512, 00:05:48.401 "claimed": false, 00:05:48.401 "driver_specific": {}, 00:05:48.401 "memory_domains": [ 00:05:48.401 { 00:05:48.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.401 "dma_device_type": 2 00:05:48.401 } 00:05:48.401 ], 00:05:48.401 "name": "Malloc3", 00:05:48.401 "num_blocks": 16384, 00:05:48.401 "product_name": "Malloc disk", 00:05:48.401 "supported_io_types": { 00:05:48.401 "abort": true, 00:05:48.401 "compare": false, 00:05:48.401 "compare_and_write": false, 00:05:48.401 "flush": true, 00:05:48.401 "nvme_admin": false, 00:05:48.401 "nvme_io": false, 00:05:48.401 "read": true, 00:05:48.401 "reset": true, 00:05:48.401 "unmap": true, 00:05:48.401 "write": true, 00:05:48.401 "write_zeroes": true 00:05:48.401 }, 00:05:48.401 "uuid": "28d95294-112c-41b6-9685-47117346847a", 00:05:48.401 "zoned": false 00:05:48.401 } 00:05:48.401 ]' 00:05:48.401 13:19:53 -- rpc/rpc.sh@17 -- # jq length 00:05:48.401 13:19:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:48.401 13:19:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:48.401 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 [2024-12-15 13:19:53.987903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:48.401 [2024-12-15 13:19:53.987957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.401 [2024-12-15 13:19:53.987972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x169d990 00:05:48.401 [2024-12-15 13:19:53.987981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.401 [2024-12-15 13:19:53.989158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.401 [2024-12-15 13:19:53.989202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:48.401 Passthru0 00:05:48.401 13:19:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.401 13:19:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:48.401 13:19:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.401 13:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:48.401 13:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.401 13:19:54 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:48.401 { 00:05:48.401 "aliases": [ 00:05:48.401 "28d95294-112c-41b6-9685-47117346847a" 00:05:48.402 ], 00:05:48.402 "assigned_rate_limits": { 00:05:48.402 "r_mbytes_per_sec": 0, 00:05:48.402 "rw_ios_per_sec": 0, 00:05:48.402 "rw_mbytes_per_sec": 0, 00:05:48.402 "w_mbytes_per_sec": 0 00:05:48.402 }, 00:05:48.402 "block_size": 512, 00:05:48.402 "claim_type": "exclusive_write", 00:05:48.402 "claimed": true, 00:05:48.402 "driver_specific": {}, 00:05:48.402 "memory_domains": [ 00:05:48.402 { 00:05:48.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.402 "dma_device_type": 2 00:05:48.402 } 00:05:48.402 ], 00:05:48.402 "name": "Malloc3", 00:05:48.402 "num_blocks": 16384, 00:05:48.402 "product_name": "Malloc disk", 00:05:48.402 "supported_io_types": { 00:05:48.402 "abort": true, 00:05:48.402 "compare": false, 00:05:48.402 "compare_and_write": false, 00:05:48.402 "flush": true, 00:05:48.402 "nvme_admin": false, 00:05:48.402 "nvme_io": false, 00:05:48.402 "read": true, 00:05:48.402 "reset": true, 
00:05:48.402 "unmap": true, 00:05:48.402 "write": true, 00:05:48.402 "write_zeroes": true 00:05:48.402 }, 00:05:48.402 "uuid": "28d95294-112c-41b6-9685-47117346847a", 00:05:48.402 "zoned": false 00:05:48.402 }, 00:05:48.402 { 00:05:48.402 "aliases": [ 00:05:48.402 "19cdfcda-ac66-55e5-a5de-809cf7e5cc63" 00:05:48.402 ], 00:05:48.402 "assigned_rate_limits": { 00:05:48.402 "r_mbytes_per_sec": 0, 00:05:48.402 "rw_ios_per_sec": 0, 00:05:48.402 "rw_mbytes_per_sec": 0, 00:05:48.402 "w_mbytes_per_sec": 0 00:05:48.402 }, 00:05:48.402 "block_size": 512, 00:05:48.402 "claimed": false, 00:05:48.402 "driver_specific": { 00:05:48.402 "passthru": { 00:05:48.402 "base_bdev_name": "Malloc3", 00:05:48.402 "name": "Passthru0" 00:05:48.402 } 00:05:48.402 }, 00:05:48.402 "memory_domains": [ 00:05:48.402 { 00:05:48.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.402 "dma_device_type": 2 00:05:48.402 } 00:05:48.402 ], 00:05:48.402 "name": "Passthru0", 00:05:48.402 "num_blocks": 16384, 00:05:48.402 "product_name": "passthru", 00:05:48.402 "supported_io_types": { 00:05:48.402 "abort": true, 00:05:48.402 "compare": false, 00:05:48.402 "compare_and_write": false, 00:05:48.402 "flush": true, 00:05:48.402 "nvme_admin": false, 00:05:48.402 "nvme_io": false, 00:05:48.402 "read": true, 00:05:48.402 "reset": true, 00:05:48.402 "unmap": true, 00:05:48.402 "write": true, 00:05:48.402 "write_zeroes": true 00:05:48.402 }, 00:05:48.402 "uuid": "19cdfcda-ac66-55e5-a5de-809cf7e5cc63", 00:05:48.402 "zoned": false 00:05:48.402 } 00:05:48.402 ]' 00:05:48.402 13:19:54 -- rpc/rpc.sh@21 -- # jq length 00:05:48.402 13:19:54 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:48.402 13:19:54 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:48.402 13:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.402 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.402 13:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.402 13:19:54 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:48.402 13:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.402 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.660 13:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.660 13:19:54 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:48.660 13:19:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.660 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.660 13:19:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.660 13:19:54 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:48.660 13:19:54 -- rpc/rpc.sh@26 -- # jq length 00:05:48.660 13:19:54 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:48.660 00:05:48.660 real 0m0.311s 00:05:48.660 user 0m0.202s 00:05:48.660 sys 0m0.035s 00:05:48.660 13:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.660 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.660 ************************************ 00:05:48.660 END TEST rpc_daemon_integrity 00:05:48.660 ************************************ 00:05:48.660 13:19:54 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:48.660 13:19:54 -- rpc/rpc.sh@84 -- # killprocess 67555 00:05:48.660 13:19:54 -- common/autotest_common.sh@936 -- # '[' -z 67555 ']' 00:05:48.660 13:19:54 -- common/autotest_common.sh@940 -- # kill -0 67555 00:05:48.660 13:19:54 -- common/autotest_common.sh@941 -- # uname 00:05:48.660 13:19:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.660 13:19:54 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67555 00:05:48.660 13:19:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.660 13:19:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.660 killing process with pid 67555 00:05:48.660 13:19:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67555' 00:05:48.660 13:19:54 -- common/autotest_common.sh@955 -- # kill 67555 00:05:48.660 13:19:54 -- common/autotest_common.sh@960 -- # wait 67555 00:05:48.919 00:05:48.919 real 0m3.224s 00:05:48.919 user 0m4.288s 00:05:48.919 sys 0m0.741s 00:05:48.919 13:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.919 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.919 ************************************ 00:05:48.919 END TEST rpc 00:05:48.919 ************************************ 00:05:49.176 13:19:54 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.176 13:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.176 13:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.176 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.176 ************************************ 00:05:49.176 START TEST rpc_client 00:05:49.176 ************************************ 00:05:49.176 13:19:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.176 * Looking for test storage... 00:05:49.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:49.176 13:19:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.176 13:19:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.176 13:19:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:49.176 13:19:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:49.176 13:19:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:49.176 13:19:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:49.176 13:19:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:49.176 13:19:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:49.176 13:19:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:49.176 13:19:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.176 13:19:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:49.176 13:19:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:49.176 13:19:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:49.176 13:19:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:49.176 13:19:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:49.176 13:19:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:49.176 13:19:54 -- scripts/common.sh@344 -- # : 1 00:05:49.176 13:19:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:49.176 13:19:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.176 13:19:54 -- scripts/common.sh@364 -- # decimal 1 00:05:49.176 13:19:54 -- scripts/common.sh@352 -- # local d=1 00:05:49.176 13:19:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.176 13:19:54 -- scripts/common.sh@354 -- # echo 1 00:05:49.176 13:19:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:49.176 13:19:54 -- scripts/common.sh@365 -- # decimal 2 00:05:49.176 13:19:54 -- scripts/common.sh@352 -- # local d=2 00:05:49.176 13:19:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.176 13:19:54 -- scripts/common.sh@354 -- # echo 2 00:05:49.176 13:19:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:49.176 13:19:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:49.176 13:19:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:49.176 13:19:54 -- scripts/common.sh@367 -- # return 0 00:05:49.176 13:19:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.176 13:19:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:49.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.176 --rc genhtml_branch_coverage=1 00:05:49.176 --rc genhtml_function_coverage=1 00:05:49.176 --rc genhtml_legend=1 00:05:49.176 --rc geninfo_all_blocks=1 00:05:49.176 --rc geninfo_unexecuted_blocks=1 00:05:49.176 00:05:49.176 ' 00:05:49.177 13:19:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:49.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.177 --rc genhtml_branch_coverage=1 00:05:49.177 --rc genhtml_function_coverage=1 00:05:49.177 --rc genhtml_legend=1 00:05:49.177 --rc geninfo_all_blocks=1 00:05:49.177 --rc geninfo_unexecuted_blocks=1 00:05:49.177 00:05:49.177 ' 00:05:49.177 13:19:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:49.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.177 --rc genhtml_branch_coverage=1 00:05:49.177 --rc genhtml_function_coverage=1 00:05:49.177 --rc genhtml_legend=1 00:05:49.177 --rc geninfo_all_blocks=1 00:05:49.177 --rc geninfo_unexecuted_blocks=1 00:05:49.177 00:05:49.177 ' 00:05:49.177 13:19:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:49.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.177 --rc genhtml_branch_coverage=1 00:05:49.177 --rc genhtml_function_coverage=1 00:05:49.177 --rc genhtml_legend=1 00:05:49.177 --rc geninfo_all_blocks=1 00:05:49.177 --rc geninfo_unexecuted_blocks=1 00:05:49.177 00:05:49.177 ' 00:05:49.177 13:19:54 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:49.177 OK 00:05:49.177 13:19:54 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.177 00:05:49.177 real 0m0.179s 00:05:49.177 user 0m0.119s 00:05:49.177 sys 0m0.071s 00:05:49.177 13:19:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.177 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.177 ************************************ 00:05:49.177 END TEST rpc_client 00:05:49.177 ************************************ 00:05:49.177 13:19:54 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.177 13:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.177 13:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.177 13:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:49.177 ************************************ 00:05:49.177 START TEST 
json_config 00:05:49.177 ************************************ 00:05:49.177 13:19:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.435 13:19:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.435 13:19:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.435 13:19:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:49.435 13:19:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:49.435 13:19:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:49.435 13:19:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:49.435 13:19:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:49.435 13:19:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:49.435 13:19:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:49.435 13:19:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.435 13:19:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:49.435 13:19:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:49.435 13:19:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:49.435 13:19:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:49.435 13:19:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:49.435 13:19:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:49.435 13:19:54 -- scripts/common.sh@344 -- # : 1 00:05:49.435 13:19:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:49.435 13:19:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.435 13:19:54 -- scripts/common.sh@364 -- # decimal 1 00:05:49.435 13:19:54 -- scripts/common.sh@352 -- # local d=1 00:05:49.435 13:19:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.435 13:19:54 -- scripts/common.sh@354 -- # echo 1 00:05:49.435 13:19:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:49.435 13:19:54 -- scripts/common.sh@365 -- # decimal 2 00:05:49.435 13:19:54 -- scripts/common.sh@352 -- # local d=2 00:05:49.435 13:19:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.436 13:19:54 -- scripts/common.sh@354 -- # echo 2 00:05:49.436 13:19:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:49.436 13:19:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:49.436 13:19:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:49.436 13:19:54 -- scripts/common.sh@367 -- # return 0 00:05:49.436 13:19:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.436 13:19:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:49.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.436 --rc genhtml_branch_coverage=1 00:05:49.436 --rc genhtml_function_coverage=1 00:05:49.436 --rc genhtml_legend=1 00:05:49.436 --rc geninfo_all_blocks=1 00:05:49.436 --rc geninfo_unexecuted_blocks=1 00:05:49.436 00:05:49.436 ' 00:05:49.436 13:19:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:49.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.436 --rc genhtml_branch_coverage=1 00:05:49.436 --rc genhtml_function_coverage=1 00:05:49.436 --rc genhtml_legend=1 00:05:49.436 --rc geninfo_all_blocks=1 00:05:49.436 --rc geninfo_unexecuted_blocks=1 00:05:49.436 00:05:49.436 ' 00:05:49.436 13:19:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:49.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.436 --rc genhtml_branch_coverage=1 00:05:49.436 --rc genhtml_function_coverage=1 00:05:49.436 --rc genhtml_legend=1 00:05:49.436 --rc 
geninfo_all_blocks=1 00:05:49.436 --rc geninfo_unexecuted_blocks=1 00:05:49.436 00:05:49.436 ' 00:05:49.436 13:19:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:49.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.436 --rc genhtml_branch_coverage=1 00:05:49.436 --rc genhtml_function_coverage=1 00:05:49.436 --rc genhtml_legend=1 00:05:49.436 --rc geninfo_all_blocks=1 00:05:49.436 --rc geninfo_unexecuted_blocks=1 00:05:49.436 00:05:49.436 ' 00:05:49.436 13:19:54 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.436 13:19:54 -- nvmf/common.sh@7 -- # uname -s 00:05:49.436 13:19:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.436 13:19:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.436 13:19:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.436 13:19:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.436 13:19:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.436 13:19:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.436 13:19:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.436 13:19:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.436 13:19:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.436 13:19:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.436 13:19:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:05:49.436 13:19:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:05:49.436 13:19:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.436 13:19:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.436 13:19:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.436 13:19:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.436 13:19:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.436 13:19:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.436 13:19:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.436 13:19:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.436 13:19:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.436 13:19:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.436 
13:19:55 -- paths/export.sh@5 -- # export PATH 00:05:49.436 13:19:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.436 13:19:55 -- nvmf/common.sh@46 -- # : 0 00:05:49.436 13:19:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:49.436 13:19:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:49.436 13:19:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:49.436 13:19:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.436 13:19:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.436 13:19:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:49.436 13:19:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:49.436 13:19:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:49.436 13:19:55 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.436 13:19:55 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:49.436 13:19:55 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:49.436 13:19:55 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:49.436 13:19:55 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:49.436 13:19:55 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:49.436 13:19:55 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:49.436 13:19:55 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:49.436 13:19:55 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:49.436 13:19:55 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:49.436 13:19:55 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.436 INFO: JSON configuration test init 00:05:49.436 13:19:55 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:49.436 13:19:55 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:49.436 13:19:55 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:49.436 13:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.436 13:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:49.436 13:19:55 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:49.436 13:19:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.436 13:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:49.436 13:19:55 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:49.436 13:19:55 -- json_config/json_config.sh@98 -- # local app=target 00:05:49.436 
13:19:55 -- json_config/json_config.sh@99 -- # shift 00:05:49.436 13:19:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:49.436 13:19:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:49.436 13:19:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=67876 00:05:49.436 Waiting for target to run... 00:05:49.436 13:19:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:49.436 13:19:55 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:49.436 13:19:55 -- json_config/json_config.sh@114 -- # waitforlisten 67876 /var/tmp/spdk_tgt.sock 00:05:49.436 13:19:55 -- common/autotest_common.sh@829 -- # '[' -z 67876 ']' 00:05:49.436 13:19:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.436 13:19:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.436 13:19:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:49.436 13:19:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.436 13:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:49.436 [2024-12-15 13:19:55.088500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:49.436 [2024-12-15 13:19:55.088608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67876 ] 00:05:50.002 [2024-12-15 13:19:55.520749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.002 [2024-12-15 13:19:55.562850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.002 [2024-12-15 13:19:55.563046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.570 13:19:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.570 13:19:56 -- common/autotest_common.sh@862 -- # return 0 00:05:50.570 00:05:50.570 13:19:56 -- json_config/json_config.sh@115 -- # echo '' 00:05:50.570 13:19:56 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:50.570 13:19:56 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:50.570 13:19:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.570 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:50.570 13:19:56 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:50.570 13:19:56 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:50.570 13:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.570 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:50.570 13:19:56 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:50.570 13:19:56 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:50.570 13:19:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
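For reference, the target startup traced above can be reproduced by hand. A minimal sketch, assuming a built SPDK tree as the working directory; the flags and socket path are the ones shown in the log, while the two RPC method names follow current SPDK naming and are not part of this trace:

  # start the target on one core with 1024 MB of memory, RPCs on a private
  # UNIX socket, and subsystem initialization deferred until told to proceed
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # poll until the RPC socket answers, then let initialization continue
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init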
load_config 00:05:51.137 13:19:56 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:51.137 13:19:56 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:51.137 13:19:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.137 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.137 13:19:56 -- json_config/json_config.sh@48 -- # local ret=0 00:05:51.137 13:19:56 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:51.137 13:19:56 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:51.137 13:19:56 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:51.138 13:19:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:51.138 13:19:56 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:51.396 13:19:56 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:51.396 13:19:56 -- json_config/json_config.sh@51 -- # local get_types 00:05:51.396 13:19:56 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:51.396 13:19:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.396 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.396 13:19:56 -- json_config/json_config.sh@58 -- # return 0 00:05:51.396 13:19:56 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:51.396 13:19:56 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:51.396 13:19:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.396 13:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.396 13:19:56 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:51.396 13:19:56 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:51.396 13:19:56 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.396 13:19:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:51.655 MallocForNvmf0 00:05:51.655 13:19:57 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.655 13:19:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:51.655 MallocForNvmf1 00:05:51.913 13:19:57 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.913 13:19:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.913 [2024-12-15 13:19:57.535584] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.913 13:19:57 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.913 13:19:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.172 13:19:57 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.172 13:19:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:52.433 13:19:58 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.433 13:19:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:52.691 13:19:58 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.691 13:19:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.950 [2024-12-15 13:19:58.404070] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.950 13:19:58 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:52.950 13:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.950 13:19:58 -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 13:19:58 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:52.950 13:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.950 13:19:58 -- common/autotest_common.sh@10 -- # set +x 00:05:52.950 13:19:58 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:52.950 13:19:58 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.950 13:19:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.209 MallocBdevForConfigChangeCheck 00:05:53.209 13:19:58 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:53.209 13:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.209 13:19:58 -- common/autotest_common.sh@10 -- # set +x 00:05:53.209 13:19:58 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:53.209 13:19:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:53.776 INFO: shutting down applications... 00:05:53.776 13:19:59 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
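The NVMe-oF configuration assembled above reduces to a handful of rpc.py calls; a condensed sketch of the same sequence, using only names, addresses, and ports that appear in the log:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB bdev, 512-byte blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024-byte blocks
  rpc nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  rpc save_config > spdk_tgt_config.json                  # persist for the relaunch below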
00:05:53.776 13:19:59 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:53.776 13:19:59 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:53.776 13:19:59 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:53.776 13:19:59 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:54.034 Calling clear_iscsi_subsystem 00:05:54.034 Calling clear_nvmf_subsystem 00:05:54.034 Calling clear_nbd_subsystem 00:05:54.034 Calling clear_ublk_subsystem 00:05:54.034 Calling clear_vhost_blk_subsystem 00:05:54.034 Calling clear_vhost_scsi_subsystem 00:05:54.034 Calling clear_scheduler_subsystem 00:05:54.034 Calling clear_bdev_subsystem 00:05:54.034 Calling clear_accel_subsystem 00:05:54.034 Calling clear_vmd_subsystem 00:05:54.034 Calling clear_sock_subsystem 00:05:54.034 Calling clear_iobuf_subsystem 00:05:54.034 13:19:59 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:54.034 13:19:59 -- json_config/json_config.sh@396 -- # count=100 00:05:54.034 13:19:59 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:54.034 13:19:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.034 13:19:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:54.034 13:19:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.292 13:19:59 -- json_config/json_config.sh@398 -- # break 00:05:54.293 13:19:59 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:54.293 13:19:59 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:54.293 13:19:59 -- json_config/json_config.sh@120 -- # local app=target 00:05:54.293 13:19:59 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:54.293 13:19:59 -- json_config/json_config.sh@124 -- # [[ -n 67876 ]] 00:05:54.293 13:19:59 -- json_config/json_config.sh@127 -- # kill -SIGINT 67876 00:05:54.293 13:19:59 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:54.293 13:19:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.293 13:19:59 -- json_config/json_config.sh@130 -- # kill -0 67876 00:05:54.293 13:19:59 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:54.857 13:20:00 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:54.857 13:20:00 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.857 13:20:00 -- json_config/json_config.sh@130 -- # kill -0 67876 00:05:54.857 13:20:00 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:54.857 13:20:00 -- json_config/json_config.sh@132 -- # break 00:05:54.857 13:20:00 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:54.857 SPDK target shutdown done 00:05:54.857 13:20:00 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:54.857 INFO: relaunching applications... 00:05:54.857 13:20:00 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
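The shutdown sequence traced above amounts to clearing the live configuration, then sending SIGINT and polling for exit. A rough shell equivalent; $tgt_pid is an assumed variable holding the target's PID, everything else is taken from the log:

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do                        # same 30 x 0.5 s budget as the test
      kill -0 "$tgt_pid" 2>/dev/null || break      # process gone: shutdown done
      sleep 0.5
  done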
00:05:54.857 13:20:00 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.857 13:20:00 -- json_config/json_config.sh@98 -- # local app=target 00:05:54.857 13:20:00 -- json_config/json_config.sh@99 -- # shift 00:05:54.857 13:20:00 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:54.857 13:20:00 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:54.857 13:20:00 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:54.857 13:20:00 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:54.857 13:20:00 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:54.857 13:20:00 -- json_config/json_config.sh@111 -- # app_pid[$app]=68151 00:05:54.857 13:20:00 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:54.857 Waiting for target to run... 00:05:54.857 13:20:00 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:54.857 13:20:00 -- json_config/json_config.sh@114 -- # waitforlisten 68151 /var/tmp/spdk_tgt.sock 00:05:54.857 13:20:00 -- common/autotest_common.sh@829 -- # '[' -z 68151 ']' 00:05:54.857 13:20:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.858 13:20:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.858 13:20:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.858 13:20:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.858 13:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.858 [2024-12-15 13:20:00.377902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.858 [2024-12-15 13:20:00.378710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68151 ] 00:05:55.116 [2024-12-15 13:20:00.778761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.374 [2024-12-15 13:20:00.827486] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.374 [2024-12-15 13:20:00.827660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.632 [2024-12-15 13:20:01.121212] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.632 [2024-12-15 13:20:01.153318] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:55.632 13:20:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.632 00:05:55.632 13:20:01 -- common/autotest_common.sh@862 -- # return 0 00:05:55.632 13:20:01 -- json_config/json_config.sh@115 -- # echo '' 00:05:55.632 13:20:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:55.632 INFO: Checking if target configuration is the same... 00:05:55.632 13:20:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
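Relaunching from the saved configuration, as done above, only requires pointing the target at the JSON file written by save_config; a minimal sketch with the same flags as the logged invocation:

  # reload the exact bdev/nvmf state captured earlier by save_config
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./spdk_tgt_config.json &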
00:05:55.632 13:20:01 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.632 13:20:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:55.632 13:20:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:55.632 + '[' 2 -ne 2 ']' 00:05:55.632 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:55.632 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:55.632 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:55.890 +++ basename /dev/fd/62 00:05:55.890 ++ mktemp /tmp/62.XXX 00:05:55.890 + tmp_file_1=/tmp/62.I9W 00:05:55.890 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:55.890 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:55.890 + tmp_file_2=/tmp/spdk_tgt_config.json.HKx 00:05:55.890 + ret=0 00:05:55.890 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:56.149 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:56.149 + diff -u /tmp/62.I9W /tmp/spdk_tgt_config.json.HKx 00:05:56.149 INFO: JSON config files are the same 00:05:56.149 + echo 'INFO: JSON config files are the same' 00:05:56.149 + rm /tmp/62.I9W /tmp/spdk_tgt_config.json.HKx 00:05:56.149 + exit 0 00:05:56.149 13:20:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:56.149 13:20:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:56.149 INFO: changing configuration and checking if this can be detected... 00:05:56.149 13:20:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.149 13:20:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:56.408 13:20:02 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.408 13:20:02 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:56.408 13:20:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.408 + '[' 2 -ne 2 ']' 00:05:56.408 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:56.408 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:56.408 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:56.408 +++ basename /dev/fd/62 00:05:56.408 ++ mktemp /tmp/62.XXX 00:05:56.408 + tmp_file_1=/tmp/62.OU2 00:05:56.408 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:56.408 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.408 + tmp_file_2=/tmp/spdk_tgt_config.json.F0t 00:05:56.408 + ret=0 00:05:56.408 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:56.976 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:56.976 + diff -u /tmp/62.OU2 /tmp/spdk_tgt_config.json.F0t 00:05:56.976 + ret=1 00:05:56.976 + echo '=== Start of file: /tmp/62.OU2 ===' 00:05:56.976 + cat /tmp/62.OU2 00:05:56.976 + echo '=== End of file: /tmp/62.OU2 ===' 00:05:56.976 + echo '' 00:05:56.976 + echo '=== Start of file: /tmp/spdk_tgt_config.json.F0t ===' 00:05:56.976 + cat /tmp/spdk_tgt_config.json.F0t 00:05:56.976 + echo '=== End of file: /tmp/spdk_tgt_config.json.F0t ===' 00:05:56.976 + echo '' 00:05:56.976 + rm /tmp/62.OU2 /tmp/spdk_tgt_config.json.F0t 00:05:56.976 + exit 1 00:05:56.976 INFO: configuration change detected. 00:05:56.976 13:20:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:56.976 13:20:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:56.976 13:20:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:56.976 13:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.976 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.976 13:20:02 -- json_config/json_config.sh@360 -- # local ret=0 00:05:56.976 13:20:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:56.976 13:20:02 -- json_config/json_config.sh@370 -- # [[ -n 68151 ]] 00:05:56.976 13:20:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:56.976 13:20:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:56.976 13:20:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.976 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.976 13:20:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:56.976 13:20:02 -- json_config/json_config.sh@246 -- # uname -s 00:05:56.976 13:20:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:56.976 13:20:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:56.976 13:20:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:56.976 13:20:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:56.976 13:20:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.976 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.976 13:20:02 -- json_config/json_config.sh@376 -- # killprocess 68151 00:05:56.976 13:20:02 -- common/autotest_common.sh@936 -- # '[' -z 68151 ']' 00:05:56.976 13:20:02 -- common/autotest_common.sh@940 -- # kill -0 68151 00:05:56.976 13:20:02 -- common/autotest_common.sh@941 -- # uname 00:05:56.976 13:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.976 13:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68151 00:05:56.976 13:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.976 13:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.976 killing process with pid 68151 00:05:56.976 13:20:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68151' 00:05:56.976 
13:20:02 -- common/autotest_common.sh@955 -- # kill 68151 00:05:56.976 13:20:02 -- common/autotest_common.sh@960 -- # wait 68151 00:05:57.235 13:20:02 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:57.235 13:20:02 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:57.235 13:20:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.235 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.235 13:20:02 -- json_config/json_config.sh@381 -- # return 0 00:05:57.235 INFO: Success 00:05:57.235 13:20:02 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:57.235 00:05:57.235 real 0m7.976s 00:05:57.235 user 0m11.216s 00:05:57.235 sys 0m1.767s 00:05:57.235 13:20:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.235 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.235 ************************************ 00:05:57.235 END TEST json_config 00:05:57.235 ************************************ 00:05:57.235 13:20:02 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.235 13:20:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.235 13:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.235 13:20:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.235 ************************************ 00:05:57.235 START TEST json_config_extra_key 00:05:57.235 ************************************ 00:05:57.235 13:20:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:57.235 13:20:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:57.235 13:20:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:57.235 13:20:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:57.493 13:20:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:57.493 13:20:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:57.493 13:20:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:57.493 13:20:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:57.493 13:20:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:57.493 13:20:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:57.493 13:20:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.493 13:20:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:57.493 13:20:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:57.493 13:20:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:57.494 13:20:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:57.494 13:20:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:57.494 13:20:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:57.494 13:20:02 -- scripts/common.sh@344 -- # : 1 00:05:57.494 13:20:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:57.494 13:20:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.494 13:20:02 -- scripts/common.sh@364 -- # decimal 1 00:05:57.494 13:20:03 -- scripts/common.sh@352 -- # local d=1 00:05:57.494 13:20:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.494 13:20:03 -- scripts/common.sh@354 -- # echo 1 00:05:57.494 13:20:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:57.494 13:20:03 -- scripts/common.sh@365 -- # decimal 2 00:05:57.494 13:20:03 -- scripts/common.sh@352 -- # local d=2 00:05:57.494 13:20:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.494 13:20:03 -- scripts/common.sh@354 -- # echo 2 00:05:57.494 13:20:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:57.494 13:20:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:57.494 13:20:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:57.494 13:20:03 -- scripts/common.sh@367 -- # return 0 00:05:57.494 13:20:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.494 13:20:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:57.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.494 --rc genhtml_branch_coverage=1 00:05:57.494 --rc genhtml_function_coverage=1 00:05:57.494 --rc genhtml_legend=1 00:05:57.494 --rc geninfo_all_blocks=1 00:05:57.494 --rc geninfo_unexecuted_blocks=1 00:05:57.494 00:05:57.494 ' 00:05:57.494 13:20:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:57.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.494 --rc genhtml_branch_coverage=1 00:05:57.494 --rc genhtml_function_coverage=1 00:05:57.494 --rc genhtml_legend=1 00:05:57.494 --rc geninfo_all_blocks=1 00:05:57.494 --rc geninfo_unexecuted_blocks=1 00:05:57.494 00:05:57.494 ' 00:05:57.494 13:20:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:57.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.494 --rc genhtml_branch_coverage=1 00:05:57.494 --rc genhtml_function_coverage=1 00:05:57.494 --rc genhtml_legend=1 00:05:57.494 --rc geninfo_all_blocks=1 00:05:57.494 --rc geninfo_unexecuted_blocks=1 00:05:57.494 00:05:57.494 ' 00:05:57.494 13:20:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:57.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.494 --rc genhtml_branch_coverage=1 00:05:57.494 --rc genhtml_function_coverage=1 00:05:57.494 --rc genhtml_legend=1 00:05:57.494 --rc geninfo_all_blocks=1 00:05:57.494 --rc geninfo_unexecuted_blocks=1 00:05:57.494 00:05:57.494 ' 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:57.494 13:20:03 -- nvmf/common.sh@7 -- # uname -s 00:05:57.494 13:20:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.494 13:20:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.494 13:20:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.494 13:20:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.494 13:20:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.494 13:20:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.494 13:20:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.494 13:20:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.494 13:20:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.494 13:20:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.494 13:20:03 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:05:57.494 13:20:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:05:57.494 13:20:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.494 13:20:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.494 13:20:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:57.494 13:20:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:57.494 13:20:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.494 13:20:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.494 13:20:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.494 13:20:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.494 13:20:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.494 13:20:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.494 13:20:03 -- paths/export.sh@5 -- # export PATH 00:05:57.494 13:20:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.494 13:20:03 -- nvmf/common.sh@46 -- # : 0 00:05:57.494 13:20:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:57.494 13:20:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:57.494 13:20:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:57.494 13:20:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.494 13:20:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.494 13:20:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:57.494 13:20:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:57.494 13:20:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:57.494 INFO: launching applications... 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68323 00:05:57.494 Waiting for target to run... 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68323 /var/tmp/spdk_tgt.sock 00:05:57.494 13:20:03 -- common/autotest_common.sh@829 -- # '[' -z 68323 ']' 00:05:57.494 13:20:03 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:57.494 13:20:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.494 13:20:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.494 13:20:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.494 13:20:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.494 13:20:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.494 [2024-12-15 13:20:03.099722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
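[Editor's note] The launch that produced the startup banner above amounts to starting spdk_tgt against extra_key.json and polling its RPC socket until it answers. This is a simplified sketch of what waitforlisten does, not the autotest_common.sh implementation; the retry loop and the rpc_get_methods probe are assumptions:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    # Poll the UNIX-domain RPC socket until the target responds (roughly 10 s here).
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock \
            rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done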
00:05:57.494 [2024-12-15 13:20:03.099818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68323 ] 00:05:58.062 [2024-12-15 13:20:03.535365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.062 [2024-12-15 13:20:03.577201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.062 [2024-12-15 13:20:03.577332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.628 13:20:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.628 00:05:58.628 13:20:04 -- common/autotest_common.sh@862 -- # return 0 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:58.628 INFO: shutting down applications... 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68323 ]] 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68323 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68323 00:05:58.628 13:20:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68323 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:59.195 SPDK target shutdown done 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:59.195 Success 00:05:59.195 13:20:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:59.195 00:05:59.195 real 0m1.740s 00:05:59.195 user 0m1.613s 00:05:59.195 sys 0m0.452s 00:05:59.195 13:20:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.195 13:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.195 ************************************ 00:05:59.195 END TEST json_config_extra_key 00:05:59.195 ************************************ 00:05:59.195 13:20:04 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.195 13:20:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.195 13:20:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.195 13:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.195 ************************************ 00:05:59.195 START TEST alias_rpc 00:05:59.195 ************************************ 00:05:59.195 13:20:04 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:59.195 * Looking for test storage... 00:05:59.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:59.195 13:20:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.195 13:20:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.195 13:20:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:59.195 13:20:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:59.195 13:20:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:59.195 13:20:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:59.195 13:20:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:59.195 13:20:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:59.195 13:20:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:59.195 13:20:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.195 13:20:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:59.195 13:20:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:59.195 13:20:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:59.195 13:20:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:59.196 13:20:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:59.196 13:20:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:59.196 13:20:04 -- scripts/common.sh@344 -- # : 1 00:05:59.196 13:20:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:59.196 13:20:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.196 13:20:04 -- scripts/common.sh@364 -- # decimal 1 00:05:59.196 13:20:04 -- scripts/common.sh@352 -- # local d=1 00:05:59.196 13:20:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.196 13:20:04 -- scripts/common.sh@354 -- # echo 1 00:05:59.196 13:20:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:59.196 13:20:04 -- scripts/common.sh@365 -- # decimal 2 00:05:59.196 13:20:04 -- scripts/common.sh@352 -- # local d=2 00:05:59.196 13:20:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.196 13:20:04 -- scripts/common.sh@354 -- # echo 2 00:05:59.196 13:20:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:59.196 13:20:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:59.196 13:20:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:59.196 13:20:04 -- scripts/common.sh@367 -- # return 0 00:05:59.196 13:20:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.196 13:20:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:59.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.196 --rc genhtml_branch_coverage=1 00:05:59.196 --rc genhtml_function_coverage=1 00:05:59.196 --rc genhtml_legend=1 00:05:59.196 --rc geninfo_all_blocks=1 00:05:59.196 --rc geninfo_unexecuted_blocks=1 00:05:59.196 00:05:59.196 ' 00:05:59.196 13:20:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:59.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.196 --rc genhtml_branch_coverage=1 00:05:59.196 --rc genhtml_function_coverage=1 00:05:59.196 --rc genhtml_legend=1 00:05:59.196 --rc geninfo_all_blocks=1 00:05:59.196 --rc geninfo_unexecuted_blocks=1 00:05:59.196 00:05:59.196 ' 00:05:59.196 13:20:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:59.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.196 --rc genhtml_branch_coverage=1 00:05:59.196 --rc genhtml_function_coverage=1 00:05:59.196 --rc genhtml_legend=1 
00:05:59.196 --rc geninfo_all_blocks=1 00:05:59.196 --rc geninfo_unexecuted_blocks=1 00:05:59.196 00:05:59.196 ' 00:05:59.196 13:20:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:59.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.196 --rc genhtml_branch_coverage=1 00:05:59.196 --rc genhtml_function_coverage=1 00:05:59.196 --rc genhtml_legend=1 00:05:59.196 --rc geninfo_all_blocks=1 00:05:59.196 --rc geninfo_unexecuted_blocks=1 00:05:59.196 00:05:59.196 ' 00:05:59.196 13:20:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.196 13:20:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68412 00:05:59.196 13:20:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68412 00:05:59.196 13:20:04 -- common/autotest_common.sh@829 -- # '[' -z 68412 ']' 00:05:59.196 13:20:04 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.196 13:20:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.196 13:20:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.196 13:20:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.196 13:20:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.196 13:20:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.455 [2024-12-15 13:20:04.885946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.455 [2024-12-15 13:20:04.886065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68412 ] 00:05:59.455 [2024-12-15 13:20:05.023205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.455 [2024-12-15 13:20:05.076198] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.455 [2024-12-15 13:20:05.076380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.390 13:20:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.390 13:20:05 -- common/autotest_common.sh@862 -- # return 0 00:06:00.390 13:20:05 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:00.649 13:20:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68412 00:06:00.649 13:20:06 -- common/autotest_common.sh@936 -- # '[' -z 68412 ']' 00:06:00.649 13:20:06 -- common/autotest_common.sh@940 -- # kill -0 68412 00:06:00.649 13:20:06 -- common/autotest_common.sh@941 -- # uname 00:06:00.649 13:20:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.649 13:20:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68412 00:06:00.649 13:20:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.649 13:20:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.649 killing process with pid 68412 00:06:00.649 13:20:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68412' 00:06:00.649 13:20:06 -- common/autotest_common.sh@955 -- # kill 68412 00:06:00.649 13:20:06 -- common/autotest_common.sh@960 -- # wait 68412 00:06:00.908 00:06:00.908 real 0m1.817s 00:06:00.908 user 0m2.039s 00:06:00.908 sys 0m0.443s 00:06:00.908 13:20:06 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.908 13:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.908 ************************************ 00:06:00.908 END TEST alias_rpc 00:06:00.908 ************************************ 00:06:00.908 13:20:06 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:06:00.908 13:20:06 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.908 13:20:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.908 13:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.908 13:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.908 ************************************ 00:06:00.908 START TEST dpdk_mem_utility 00:06:00.908 ************************************ 00:06:00.908 13:20:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.908 * Looking for test storage... 00:06:00.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.908 13:20:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.908 13:20:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.908 13:20:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.166 13:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.166 13:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.166 13:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.166 13:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.166 13:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.166 13:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.166 13:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.167 13:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.167 13:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.167 13:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.167 13:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.167 13:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.167 13:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.167 13:20:06 -- scripts/common.sh@344 -- # : 1 00:06:01.167 13:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.167 13:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.167 13:20:06 -- scripts/common.sh@364 -- # decimal 1 00:06:01.167 13:20:06 -- scripts/common.sh@352 -- # local d=1 00:06:01.167 13:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.167 13:20:06 -- scripts/common.sh@354 -- # echo 1 00:06:01.167 13:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.167 13:20:06 -- scripts/common.sh@365 -- # decimal 2 00:06:01.167 13:20:06 -- scripts/common.sh@352 -- # local d=2 00:06:01.167 13:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.167 13:20:06 -- scripts/common.sh@354 -- # echo 2 00:06:01.167 13:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.167 13:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.167 13:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.167 13:20:06 -- scripts/common.sh@367 -- # return 0 00:06:01.167 13:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.167 13:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.167 --rc genhtml_branch_coverage=1 00:06:01.167 --rc genhtml_function_coverage=1 00:06:01.167 --rc genhtml_legend=1 00:06:01.167 --rc geninfo_all_blocks=1 00:06:01.167 --rc geninfo_unexecuted_blocks=1 00:06:01.167 00:06:01.167 ' 00:06:01.167 13:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.167 --rc genhtml_branch_coverage=1 00:06:01.167 --rc genhtml_function_coverage=1 00:06:01.167 --rc genhtml_legend=1 00:06:01.167 --rc geninfo_all_blocks=1 00:06:01.167 --rc geninfo_unexecuted_blocks=1 00:06:01.167 00:06:01.167 ' 00:06:01.167 13:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.167 --rc genhtml_branch_coverage=1 00:06:01.167 --rc genhtml_function_coverage=1 00:06:01.167 --rc genhtml_legend=1 00:06:01.167 --rc geninfo_all_blocks=1 00:06:01.167 --rc geninfo_unexecuted_blocks=1 00:06:01.167 00:06:01.167 ' 00:06:01.167 13:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.167 --rc genhtml_branch_coverage=1 00:06:01.167 --rc genhtml_function_coverage=1 00:06:01.167 --rc genhtml_legend=1 00:06:01.167 --rc geninfo_all_blocks=1 00:06:01.167 --rc geninfo_unexecuted_blocks=1 00:06:01.167 00:06:01.167 ' 00:06:01.167 13:20:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.167 13:20:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68511 00:06:01.167 13:20:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.167 13:20:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68511 00:06:01.167 13:20:06 -- common/autotest_common.sh@829 -- # '[' -z 68511 ']' 00:06:01.167 13:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.167 13:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.167 13:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
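[Editor's note] Once this target is up, the memory report that follows is produced in two steps: the env_dpdk_get_mem_stats RPC asks the running spdk_tgt to write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump. A condensed sketch using only the commands visible in this log (the default /var/tmp/spdk.sock socket used by this test is assumed):

    # Ask the target to dump its DPDK memory state; the RPC reply names the dump file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # Summarize heaps, mempools and memzones from the dump.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # Show the detailed element list for heap id 0 (the long listing below).
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0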
00:06:01.167 13:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.167 13:20:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.167 [2024-12-15 13:20:06.738877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.167 [2024-12-15 13:20:06.739162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68511 ] 00:06:01.426 [2024-12-15 13:20:06.871194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.426 [2024-12-15 13:20:06.923502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.426 [2024-12-15 13:20:06.923968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.363 13:20:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.363 13:20:07 -- common/autotest_common.sh@862 -- # return 0 00:06:02.363 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:02.363 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:02.363 13:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.363 13:20:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.363 { 00:06:02.363 "filename": "/tmp/spdk_mem_dump.txt" 00:06:02.363 } 00:06:02.363 13:20:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.363 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.363 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:02.363 1 heaps totaling size 814.000000 MiB 00:06:02.363 size: 814.000000 MiB heap id: 0 00:06:02.363 end heaps---------- 00:06:02.363 8 mempools totaling size 598.116089 MiB 00:06:02.363 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.363 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.363 size: 84.521057 MiB name: bdev_io_68511 00:06:02.363 size: 51.011292 MiB name: evtpool_68511 00:06:02.363 size: 50.003479 MiB name: msgpool_68511 00:06:02.363 size: 21.763794 MiB name: PDU_Pool 00:06:02.363 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.363 size: 0.026123 MiB name: Session_Pool 00:06:02.363 end mempools------- 00:06:02.363 6 memzones totaling size 4.142822 MiB 00:06:02.363 size: 1.000366 MiB name: RG_ring_0_68511 00:06:02.363 size: 1.000366 MiB name: RG_ring_1_68511 00:06:02.363 size: 1.000366 MiB name: RG_ring_4_68511 00:06:02.363 size: 1.000366 MiB name: RG_ring_5_68511 00:06:02.363 size: 0.125366 MiB name: RG_ring_2_68511 00:06:02.363 size: 0.015991 MiB name: RG_ring_3_68511 00:06:02.363 end memzones------- 00:06:02.363 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.363 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:06:02.363 list of free elements. 
size: 12.487488 MiB 00:06:02.363 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:02.363 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:02.363 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:02.363 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:02.363 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:02.363 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:02.363 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:02.363 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:02.363 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:02.363 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:06:02.363 element at address: 0x20000b200000 with size: 0.489990 MiB 00:06:02.363 element at address: 0x200000800000 with size: 0.487061 MiB 00:06:02.363 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:02.363 element at address: 0x200027e00000 with size: 0.398132 MiB 00:06:02.363 element at address: 0x200003a00000 with size: 0.351685 MiB 00:06:02.363 list of standard malloc elements. size: 199.249939 MiB 00:06:02.363 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:02.363 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:02.363 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:02.363 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:02.363 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:02.363 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:02.363 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:02.363 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:02.363 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:02.363 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:06:02.363 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:02.363 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:02.363 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:02.364 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:06:02.364 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:02.364 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:02.364 list of memzone associated elements. 
size: 602.262573 MiB 00:06:02.364 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:02.364 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.364 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:02.364 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.364 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:02.364 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68511_0 00:06:02.364 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:02.364 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68511_0 00:06:02.364 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:02.364 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68511_0 00:06:02.364 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:02.365 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.365 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:02.365 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.365 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:02.365 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68511 00:06:02.365 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:02.365 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68511 00:06:02.365 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:02.365 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68511 00:06:02.365 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:02.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.365 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:02.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.365 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:02.365 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.365 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:02.365 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.365 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:02.365 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68511 00:06:02.365 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:02.365 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68511 00:06:02.365 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:02.365 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68511 00:06:02.365 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:02.365 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68511 00:06:02.365 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:02.365 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68511 00:06:02.365 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:02.365 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.365 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:02.365 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.365 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:02.365 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.365 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:02.365 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68511 00:06:02.365 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:02.365 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.365 element at address: 0x200027e66040 with size: 0.023743 MiB 00:06:02.365 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.365 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:02.365 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68511 00:06:02.365 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:06:02.365 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.365 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:02.365 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68511 00:06:02.365 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:02.365 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68511 00:06:02.365 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:06:02.365 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.365 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.365 13:20:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68511 00:06:02.365 13:20:07 -- common/autotest_common.sh@936 -- # '[' -z 68511 ']' 00:06:02.365 13:20:07 -- common/autotest_common.sh@940 -- # kill -0 68511 00:06:02.365 13:20:07 -- common/autotest_common.sh@941 -- # uname 00:06:02.365 13:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.365 13:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68511 00:06:02.365 13:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.365 13:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.365 13:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68511' 00:06:02.365 killing process with pid 68511 00:06:02.365 13:20:07 -- common/autotest_common.sh@955 -- # kill 68511 00:06:02.365 13:20:07 -- common/autotest_common.sh@960 -- # wait 68511 00:06:02.624 00:06:02.624 real 0m1.747s 00:06:02.624 user 0m1.895s 00:06:02.624 sys 0m0.431s 00:06:02.624 ************************************ 00:06:02.624 END TEST dpdk_mem_utility 00:06:02.624 ************************************ 00:06:02.624 13:20:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.624 13:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.624 13:20:08 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.624 13:20:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.624 13:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.624 13:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.624 ************************************ 00:06:02.624 START TEST event 00:06:02.624 ************************************ 00:06:02.624 13:20:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.882 * Looking for test storage... 
00:06:02.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.882 13:20:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:02.882 13:20:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:02.882 13:20:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:02.882 13:20:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:02.882 13:20:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:02.882 13:20:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:02.882 13:20:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:02.882 13:20:08 -- scripts/common.sh@335 -- # IFS=.-: 00:06:02.882 13:20:08 -- scripts/common.sh@335 -- # read -ra ver1 00:06:02.882 13:20:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.882 13:20:08 -- scripts/common.sh@336 -- # read -ra ver2 00:06:02.882 13:20:08 -- scripts/common.sh@337 -- # local 'op=<' 00:06:02.882 13:20:08 -- scripts/common.sh@339 -- # ver1_l=2 00:06:02.882 13:20:08 -- scripts/common.sh@340 -- # ver2_l=1 00:06:02.882 13:20:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:02.882 13:20:08 -- scripts/common.sh@343 -- # case "$op" in 00:06:02.882 13:20:08 -- scripts/common.sh@344 -- # : 1 00:06:02.882 13:20:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:02.882 13:20:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.882 13:20:08 -- scripts/common.sh@364 -- # decimal 1 00:06:02.882 13:20:08 -- scripts/common.sh@352 -- # local d=1 00:06:02.882 13:20:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.882 13:20:08 -- scripts/common.sh@354 -- # echo 1 00:06:02.882 13:20:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:02.882 13:20:08 -- scripts/common.sh@365 -- # decimal 2 00:06:02.882 13:20:08 -- scripts/common.sh@352 -- # local d=2 00:06:02.882 13:20:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.882 13:20:08 -- scripts/common.sh@354 -- # echo 2 00:06:02.882 13:20:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:02.882 13:20:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:02.882 13:20:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:02.882 13:20:08 -- scripts/common.sh@367 -- # return 0 00:06:02.883 13:20:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.883 13:20:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:02.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.883 --rc genhtml_branch_coverage=1 00:06:02.883 --rc genhtml_function_coverage=1 00:06:02.883 --rc genhtml_legend=1 00:06:02.883 --rc geninfo_all_blocks=1 00:06:02.883 --rc geninfo_unexecuted_blocks=1 00:06:02.883 00:06:02.883 ' 00:06:02.883 13:20:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:02.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.883 --rc genhtml_branch_coverage=1 00:06:02.883 --rc genhtml_function_coverage=1 00:06:02.883 --rc genhtml_legend=1 00:06:02.883 --rc geninfo_all_blocks=1 00:06:02.883 --rc geninfo_unexecuted_blocks=1 00:06:02.883 00:06:02.883 ' 00:06:02.883 13:20:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:02.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.883 --rc genhtml_branch_coverage=1 00:06:02.883 --rc genhtml_function_coverage=1 00:06:02.883 --rc genhtml_legend=1 00:06:02.883 --rc geninfo_all_blocks=1 00:06:02.883 --rc geninfo_unexecuted_blocks=1 00:06:02.883 00:06:02.883 ' 00:06:02.883 13:20:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:02.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.883 --rc genhtml_branch_coverage=1 00:06:02.883 --rc genhtml_function_coverage=1 00:06:02.883 --rc genhtml_legend=1 00:06:02.883 --rc geninfo_all_blocks=1 00:06:02.883 --rc geninfo_unexecuted_blocks=1 00:06:02.883 00:06:02.883 ' 00:06:02.883 13:20:08 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:02.883 13:20:08 -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.883 13:20:08 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.883 13:20:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:02.883 13:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.883 13:20:08 -- common/autotest_common.sh@10 -- # set +x 00:06:02.883 ************************************ 00:06:02.883 START TEST event_perf 00:06:02.883 ************************************ 00:06:02.883 13:20:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.883 Running I/O for 1 seconds...[2024-12-15 13:20:08.566946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.883 [2024-12-15 13:20:08.567172] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68614 ] 00:06:03.141 [2024-12-15 13:20:08.703328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.141 [2024-12-15 13:20:08.756436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.141 [2024-12-15 13:20:08.756563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.141 [2024-12-15 13:20:08.756686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.141 [2024-12-15 13:20:08.756686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.517 Running I/O for 1 seconds... 00:06:04.517 lcore 0: 217020 00:06:04.517 lcore 1: 217019 00:06:04.517 lcore 2: 217019 00:06:04.517 lcore 3: 217019 00:06:04.517 done. 00:06:04.517 00:06:04.517 real 0m1.277s 00:06:04.517 user 0m4.104s 00:06:04.517 sys 0m0.057s 00:06:04.517 13:20:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.517 ************************************ 00:06:04.517 END TEST event_perf 00:06:04.517 ************************************ 00:06:04.517 13:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.517 13:20:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.517 13:20:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:04.517 13:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.517 13:20:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.517 ************************************ 00:06:04.517 START TEST event_reactor 00:06:04.517 ************************************ 00:06:04.517 13:20:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:04.517 [2024-12-15 13:20:09.893241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:04.517 [2024-12-15 13:20:09.893330] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68647 ] 00:06:04.517 [2024-12-15 13:20:10.025440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.517 [2024-12-15 13:20:10.074389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.485 test_start 00:06:05.485 oneshot 00:06:05.485 tick 100 00:06:05.485 tick 100 00:06:05.485 tick 250 00:06:05.485 tick 100 00:06:05.485 tick 100 00:06:05.485 tick 250 00:06:05.485 tick 500 00:06:05.485 tick 100 00:06:05.485 tick 100 00:06:05.485 tick 100 00:06:05.485 tick 250 00:06:05.485 tick 100 00:06:05.485 tick 100 00:06:05.485 test_end 00:06:05.485 00:06:05.485 real 0m1.247s 00:06:05.485 user 0m1.099s 00:06:05.485 sys 0m0.044s 00:06:05.485 ************************************ 00:06:05.485 END TEST event_reactor 00:06:05.485 ************************************ 00:06:05.485 13:20:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.485 13:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:05.485 13:20:11 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.485 13:20:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:05.485 13:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.485 13:20:11 -- common/autotest_common.sh@10 -- # set +x 00:06:05.745 ************************************ 00:06:05.745 START TEST event_reactor_perf 00:06:05.745 ************************************ 00:06:05.745 13:20:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.745 [2024-12-15 13:20:11.194856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:05.745 [2024-12-15 13:20:11.194953] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68677 ] 00:06:05.745 [2024-12-15 13:20:11.332873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.745 [2024-12-15 13:20:11.391728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.122 test_start 00:06:07.122 test_end 00:06:07.122 Performance: 473327 events per second 00:06:07.122 ************************************ 00:06:07.122 END TEST event_reactor_perf 00:06:07.122 ************************************ 00:06:07.122 00:06:07.122 real 0m1.267s 00:06:07.122 user 0m1.110s 00:06:07.122 sys 0m0.052s 00:06:07.122 13:20:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.122 13:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.122 13:20:12 -- event/event.sh@49 -- # uname -s 00:06:07.122 13:20:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:07.122 13:20:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.122 13:20:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.122 13:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.122 ************************************ 00:06:07.122 START TEST event_scheduler 00:06:07.122 ************************************ 00:06:07.122 13:20:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:07.122 * Looking for test storage... 00:06:07.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:07.122 13:20:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:07.122 13:20:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:07.122 13:20:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:07.122 13:20:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:07.122 13:20:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:07.122 13:20:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:07.122 13:20:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:07.122 13:20:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:07.122 13:20:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.122 13:20:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:07.122 13:20:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:07.122 13:20:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:07.122 13:20:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:07.122 13:20:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:07.122 13:20:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:07.122 13:20:12 -- scripts/common.sh@344 -- # : 1 00:06:07.122 13:20:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:07.122 13:20:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.122 13:20:12 -- scripts/common.sh@364 -- # decimal 1 00:06:07.122 13:20:12 -- scripts/common.sh@352 -- # local d=1 00:06:07.122 13:20:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.122 13:20:12 -- scripts/common.sh@354 -- # echo 1 00:06:07.122 13:20:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:07.122 13:20:12 -- scripts/common.sh@365 -- # decimal 2 00:06:07.122 13:20:12 -- scripts/common.sh@352 -- # local d=2 00:06:07.122 13:20:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.122 13:20:12 -- scripts/common.sh@354 -- # echo 2 00:06:07.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.122 13:20:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:07.122 13:20:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:07.122 13:20:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:07.122 13:20:12 -- scripts/common.sh@367 -- # return 0 00:06:07.122 13:20:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:07.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.122 --rc genhtml_branch_coverage=1 00:06:07.122 --rc genhtml_function_coverage=1 00:06:07.122 --rc genhtml_legend=1 00:06:07.122 --rc geninfo_all_blocks=1 00:06:07.122 --rc geninfo_unexecuted_blocks=1 00:06:07.122 00:06:07.122 ' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:07.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.122 --rc genhtml_branch_coverage=1 00:06:07.122 --rc genhtml_function_coverage=1 00:06:07.122 --rc genhtml_legend=1 00:06:07.122 --rc geninfo_all_blocks=1 00:06:07.122 --rc geninfo_unexecuted_blocks=1 00:06:07.122 00:06:07.122 ' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:07.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.122 --rc genhtml_branch_coverage=1 00:06:07.122 --rc genhtml_function_coverage=1 00:06:07.122 --rc genhtml_legend=1 00:06:07.122 --rc geninfo_all_blocks=1 00:06:07.122 --rc geninfo_unexecuted_blocks=1 00:06:07.122 00:06:07.122 ' 00:06:07.122 13:20:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:07.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.122 --rc genhtml_branch_coverage=1 00:06:07.122 --rc genhtml_function_coverage=1 00:06:07.122 --rc genhtml_legend=1 00:06:07.122 --rc geninfo_all_blocks=1 00:06:07.122 --rc geninfo_unexecuted_blocks=1 00:06:07.122 00:06:07.122 ' 00:06:07.122 13:20:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:07.122 13:20:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68751 00:06:07.122 13:20:12 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:07.122 13:20:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.122 13:20:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 68751 00:06:07.122 13:20:12 -- common/autotest_common.sh@829 -- # '[' -z 68751 ']' 00:06:07.122 13:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.122 13:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.122 13:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.122 13:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.122 13:20:12 -- common/autotest_common.sh@10 -- # set +x 00:06:07.123 [2024-12-15 13:20:12.729378] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.123 [2024-12-15 13:20:12.730379] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68751 ] 00:06:07.381 [2024-12-15 13:20:12.868772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.381 [2024-12-15 13:20:12.938005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.381 [2024-12-15 13:20:12.938151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.381 [2024-12-15 13:20:12.938288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.381 [2024-12-15 13:20:12.938294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.317 13:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.317 13:20:13 -- common/autotest_common.sh@862 -- # return 0 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:08.317 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.317 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.317 POWER: Env isn't set yet! 00:06:08.317 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:08.317 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.317 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.317 POWER: Attempting to initialise PSTAT power management... 00:06:08.317 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.317 POWER: Cannot set governor of lcore 0 to performance 00:06:08.317 POWER: Attempting to initialise AMD PSTATE power management... 00:06:08.317 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.317 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.317 POWER: Attempting to initialise CPPC power management... 00:06:08.317 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:08.317 POWER: Cannot set governor of lcore 0 to userspace 00:06:08.317 POWER: Attempting to initialise VM power management... 
00:06:08.317 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:08.317 POWER: Unable to set Power Management Environment for lcore 0 00:06:08.317 [2024-12-15 13:20:13.669743] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:08.317 [2024-12-15 13:20:13.669756] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:08.317 [2024-12-15 13:20:13.669765] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:08.317 [2024-12-15 13:20:13.669777] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:08.317 [2024-12-15 13:20:13.669784] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:08.317 [2024-12-15 13:20:13.669791] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:08.317 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:08.317 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.317 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.317 [2024-12-15 13:20:13.754456] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:08.317 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:08.317 13:20:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.317 13:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.317 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.317 ************************************ 00:06:08.317 START TEST scheduler_create_thread 00:06:08.317 ************************************ 00:06:08.317 13:20:13 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:08.317 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.317 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.317 2 00:06:08.317 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:08.317 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.317 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.317 3 00:06:08.317 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.317 13:20:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:08.317 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 4 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 5 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 6 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 7 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 8 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 9 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 10 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 13:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.318 13:20:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:08.318 13:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.318 13:20:13 -- common/autotest_common.sh@10 -- # set +x 00:06:09.693 13:20:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.693 13:20:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:09.693 13:20:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:09.693 13:20:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.693 13:20:15 -- common/autotest_common.sh@10 -- # set +x 00:06:11.067 ************************************ 00:06:11.067 END TEST scheduler_create_thread 00:06:11.067 ************************************ 00:06:11.067 13:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.067 00:06:11.067 real 0m2.614s 00:06:11.067 user 0m0.017s 00:06:11.067 sys 0m0.006s 00:06:11.067 13:20:16 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.067 13:20:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.067 13:20:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.067 13:20:16 -- scheduler/scheduler.sh@46 -- # killprocess 68751 00:06:11.067 13:20:16 -- common/autotest_common.sh@936 -- # '[' -z 68751 ']' 00:06:11.067 13:20:16 -- common/autotest_common.sh@940 -- # kill -0 68751 00:06:11.067 13:20:16 -- common/autotest_common.sh@941 -- # uname 00:06:11.068 13:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.068 13:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68751 00:06:11.068 killing process with pid 68751 00:06:11.068 13:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:11.068 13:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:11.068 13:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68751' 00:06:11.068 13:20:16 -- common/autotest_common.sh@955 -- # kill 68751 00:06:11.068 13:20:16 -- common/autotest_common.sh@960 -- # wait 68751 00:06:11.326 [2024-12-15 13:20:16.861629] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:11.585 00:06:11.585 real 0m4.555s 00:06:11.585 user 0m8.535s 00:06:11.585 sys 0m0.382s 00:06:11.585 13:20:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.585 13:20:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.585 ************************************ 00:06:11.585 END TEST event_scheduler 00:06:11.585 ************************************ 00:06:11.585 13:20:17 -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.585 13:20:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.585 13:20:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.585 13:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.585 13:20:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.585 ************************************ 00:06:11.585 START TEST app_repeat 00:06:11.585 ************************************ 00:06:11.585 13:20:17 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:11.585 13:20:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.585 13:20:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.585 13:20:17 -- event/event.sh@13 -- # local nbd_list 00:06:11.585 13:20:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.585 13:20:17 -- event/event.sh@14 -- # local bdev_list 00:06:11.585 13:20:17 -- event/event.sh@15 -- # local repeat_times=4 00:06:11.585 13:20:17 -- event/event.sh@17 -- # modprobe nbd 00:06:11.585 Process app_repeat pid: 68863 00:06:11.585 spdk_app_start Round 0 00:06:11.585 13:20:17 -- event/event.sh@19 -- # repeat_pid=68863 00:06:11.585 13:20:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.585 13:20:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68863' 00:06:11.585 13:20:17 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.585 13:20:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.585 13:20:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.585 13:20:17 -- event/event.sh@25 -- # waitforlisten 68863 /var/tmp/spdk-nbd.sock 00:06:11.585 13:20:17 -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:06:11.585 13:20:17 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.585 13:20:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.585 13:20:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.585 13:20:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.585 13:20:17 -- common/autotest_common.sh@10 -- # set +x 00:06:11.585 [2024-12-15 13:20:17.139964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.585 [2024-12-15 13:20:17.140060] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:06:11.844 [2024-12-15 13:20:17.277487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.844 [2024-12-15 13:20:17.333786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.844 [2024-12-15 13:20:17.333795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.780 13:20:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.780 13:20:18 -- common/autotest_common.sh@862 -- # return 0 00:06:12.780 13:20:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.780 Malloc0 00:06:12.780 13:20:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.039 Malloc1 00:06:13.039 13:20:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.039 13:20:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.606 /dev/nbd0 00:06:13.606 13:20:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.606 13:20:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.606 13:20:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:13.606 13:20:19 -- common/autotest_common.sh@867 -- # local i 00:06:13.606 13:20:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.606 13:20:19 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.606 13:20:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:13.606 13:20:19 -- common/autotest_common.sh@871 -- # break 00:06:13.606 13:20:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.606 13:20:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.606 13:20:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.606 1+0 records in 00:06:13.606 1+0 records out 00:06:13.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284057 s, 14.4 MB/s 00:06:13.606 13:20:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.606 13:20:19 -- common/autotest_common.sh@884 -- # size=4096 00:06:13.606 13:20:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.606 13:20:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.606 13:20:19 -- common/autotest_common.sh@887 -- # return 0 00:06:13.606 13:20:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.606 13:20:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.606 13:20:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.606 /dev/nbd1 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.866 13:20:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.866 13:20:19 -- common/autotest_common.sh@867 -- # local i 00:06:13.866 13:20:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.866 13:20:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.866 13:20:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.866 13:20:19 -- common/autotest_common.sh@871 -- # break 00:06:13.866 13:20:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.866 13:20:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.866 13:20:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.866 1+0 records in 00:06:13.866 1+0 records out 00:06:13.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407847 s, 10.0 MB/s 00:06:13.866 13:20:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.866 13:20:19 -- common/autotest_common.sh@884 -- # size=4096 00:06:13.866 13:20:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.866 13:20:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.866 13:20:19 -- common/autotest_common.sh@887 -- # return 0 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.866 { 00:06:13.866 "bdev_name": "Malloc0", 00:06:13.866 "nbd_device": "/dev/nbd0" 00:06:13.866 }, 00:06:13.866 { 00:06:13.866 "bdev_name": "Malloc1", 
00:06:13.866 "nbd_device": "/dev/nbd1" 00:06:13.866 } 00:06:13.866 ]' 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.866 { 00:06:13.866 "bdev_name": "Malloc0", 00:06:13.866 "nbd_device": "/dev/nbd0" 00:06:13.866 }, 00:06:13.866 { 00:06:13.866 "bdev_name": "Malloc1", 00:06:13.866 "nbd_device": "/dev/nbd1" 00:06:13.866 } 00:06:13.866 ]' 00:06:13.866 13:20:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.126 /dev/nbd1' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.126 /dev/nbd1' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.126 256+0 records in 00:06:14.126 256+0 records out 00:06:14.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00807391 s, 130 MB/s 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.126 256+0 records in 00:06:14.126 256+0 records out 00:06:14.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233566 s, 44.9 MB/s 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.126 256+0 records in 00:06:14.126 256+0 records out 00:06:14.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271798 s, 38.6 MB/s 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.126 13:20:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@41 -- # break 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.385 13:20:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@41 -- # break 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.645 13:20:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@65 -- # true 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.903 13:20:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.903 13:20:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.162 13:20:20 -- event/event.sh@35 -- # sleep 3 00:06:15.421 [2024-12-15 13:20:21.006754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.421 [2024-12-15 13:20:21.048353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.421 [2024-12-15 
13:20:21.048364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.421 [2024-12-15 13:20:21.100381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.421 [2024-12-15 13:20:21.100484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.717 spdk_app_start Round 1 00:06:18.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.717 13:20:23 -- event/event.sh@23 -- # for i in {0..2} 00:06:18.717 13:20:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:18.717 13:20:23 -- event/event.sh@25 -- # waitforlisten 68863 /var/tmp/spdk-nbd.sock 00:06:18.717 13:20:23 -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:06:18.717 13:20:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.717 13:20:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.717 13:20:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.717 13:20:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.717 13:20:23 -- common/autotest_common.sh@10 -- # set +x 00:06:18.717 13:20:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.717 13:20:24 -- common/autotest_common.sh@862 -- # return 0 00:06:18.717 13:20:24 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.717 Malloc0 00:06:18.976 13:20:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.976 Malloc1 00:06:18.976 13:20:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.976 13:20:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.543 /dev/nbd0 00:06:19.543 13:20:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.543 13:20:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.543 13:20:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.543 13:20:24 -- common/autotest_common.sh@867 -- # local i 00:06:19.543 13:20:24 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:19.543 13:20:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.543 13:20:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.544 13:20:24 -- common/autotest_common.sh@871 -- # break 00:06:19.544 13:20:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.544 13:20:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.544 13:20:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.544 1+0 records in 00:06:19.544 1+0 records out 00:06:19.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236792 s, 17.3 MB/s 00:06:19.544 13:20:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.544 13:20:24 -- common/autotest_common.sh@884 -- # size=4096 00:06:19.544 13:20:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.544 13:20:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.544 13:20:24 -- common/autotest_common.sh@887 -- # return 0 00:06:19.544 13:20:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.544 13:20:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.544 13:20:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.802 /dev/nbd1 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.802 13:20:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.802 13:20:25 -- common/autotest_common.sh@867 -- # local i 00:06:19.802 13:20:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.802 13:20:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.802 13:20:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.802 13:20:25 -- common/autotest_common.sh@871 -- # break 00:06:19.802 13:20:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.802 13:20:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.802 13:20:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.802 1+0 records in 00:06:19.802 1+0 records out 00:06:19.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346804 s, 11.8 MB/s 00:06:19.802 13:20:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.802 13:20:25 -- common/autotest_common.sh@884 -- # size=4096 00:06:19.802 13:20:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.802 13:20:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.802 13:20:25 -- common/autotest_common.sh@887 -- # return 0 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.802 13:20:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.061 { 00:06:20.061 "bdev_name": "Malloc0", 00:06:20.061 "nbd_device": "/dev/nbd0" 00:06:20.061 }, 00:06:20.061 { 00:06:20.061 
"bdev_name": "Malloc1", 00:06:20.061 "nbd_device": "/dev/nbd1" 00:06:20.061 } 00:06:20.061 ]' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.061 { 00:06:20.061 "bdev_name": "Malloc0", 00:06:20.061 "nbd_device": "/dev/nbd0" 00:06:20.061 }, 00:06:20.061 { 00:06:20.061 "bdev_name": "Malloc1", 00:06:20.061 "nbd_device": "/dev/nbd1" 00:06:20.061 } 00:06:20.061 ]' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.061 /dev/nbd1' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.061 /dev/nbd1' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.061 256+0 records in 00:06:20.061 256+0 records out 00:06:20.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106033 s, 98.9 MB/s 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.061 256+0 records in 00:06:20.061 256+0 records out 00:06:20.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229439 s, 45.7 MB/s 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.061 256+0 records in 00:06:20.061 256+0 records out 00:06:20.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026672 s, 39.3 MB/s 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.061 13:20:25 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.061 13:20:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@41 -- # break 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.320 13:20:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.578 13:20:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@41 -- # break 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.837 13:20:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@65 -- # true 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.096 13:20:26 -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.096 13:20:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.356 13:20:26 -- event/event.sh@35 -- # sleep 3 00:06:21.356 [2024-12-15 13:20:27.018059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.615 [2024-12-15 13:20:27.059337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:21.615 [2024-12-15 13:20:27.059349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.615 [2024-12-15 13:20:27.111046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.615 [2024-12-15 13:20:27.111127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.901 spdk_app_start Round 2 00:06:24.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.901 13:20:29 -- event/event.sh@23 -- # for i in {0..2} 00:06:24.901 13:20:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:24.901 13:20:29 -- event/event.sh@25 -- # waitforlisten 68863 /var/tmp/spdk-nbd.sock 00:06:24.901 13:20:29 -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:06:24.901 13:20:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.901 13:20:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.901 13:20:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.901 13:20:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.901 13:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.901 13:20:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.901 13:20:30 -- common/autotest_common.sh@862 -- # return 0 00:06:24.901 13:20:30 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.901 Malloc0 00:06:24.901 13:20:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.159 Malloc1 00:06:25.159 13:20:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.159 13:20:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@12 -- # local i 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.160 13:20:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.418 /dev/nbd0 00:06:25.418 13:20:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.418 13:20:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.418 13:20:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:25.418 13:20:31 -- common/autotest_common.sh@867 -- # local i 00:06:25.419 13:20:31 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.419 13:20:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.419 13:20:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.419 13:20:31 -- common/autotest_common.sh@871 -- # break 00:06:25.419 13:20:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.419 13:20:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.419 13:20:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.419 1+0 records in 00:06:25.419 1+0 records out 00:06:25.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250722 s, 16.3 MB/s 00:06:25.419 13:20:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.419 13:20:31 -- common/autotest_common.sh@884 -- # size=4096 00:06:25.419 13:20:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.419 13:20:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.419 13:20:31 -- common/autotest_common.sh@887 -- # return 0 00:06:25.419 13:20:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.419 13:20:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.419 13:20:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.677 /dev/nbd1 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.677 13:20:31 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:25.677 13:20:31 -- common/autotest_common.sh@867 -- # local i 00:06:25.677 13:20:31 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.677 13:20:31 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.677 13:20:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:25.677 13:20:31 -- common/autotest_common.sh@871 -- # break 00:06:25.677 13:20:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.677 13:20:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.677 13:20:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.677 1+0 records in 00:06:25.677 1+0 records out 00:06:25.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227603 s, 18.0 MB/s 00:06:25.677 13:20:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.677 13:20:31 -- common/autotest_common.sh@884 -- # size=4096 00:06:25.677 13:20:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.677 13:20:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.677 13:20:31 -- common/autotest_common.sh@887 -- # return 0 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.677 13:20:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.937 13:20:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.937 { 00:06:25.937 "bdev_name": "Malloc0", 00:06:25.937 "nbd_device": "/dev/nbd0" 
00:06:25.937 }, 00:06:25.937 { 00:06:25.937 "bdev_name": "Malloc1", 00:06:25.937 "nbd_device": "/dev/nbd1" 00:06:25.937 } 00:06:25.937 ]' 00:06:25.937 13:20:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.937 { 00:06:25.937 "bdev_name": "Malloc0", 00:06:25.937 "nbd_device": "/dev/nbd0" 00:06:25.937 }, 00:06:25.937 { 00:06:25.937 "bdev_name": "Malloc1", 00:06:25.937 "nbd_device": "/dev/nbd1" 00:06:25.937 } 00:06:25.937 ]' 00:06:25.937 13:20:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.196 /dev/nbd1' 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.196 /dev/nbd1' 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.196 13:20:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.197 256+0 records in 00:06:26.197 256+0 records out 00:06:26.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627892 s, 167 MB/s 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.197 256+0 records in 00:06:26.197 256+0 records out 00:06:26.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023017 s, 45.6 MB/s 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.197 256+0 records in 00:06:26.197 256+0 records out 00:06:26.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025524 s, 41.1 MB/s 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@51 -- # local i 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.197 13:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@41 -- # break 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.456 13:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.714 13:20:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.714 13:20:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@41 -- # break 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.715 13:20:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@65 -- # true 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.973 13:20:32 -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.973 13:20:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.232 13:20:32 -- event/event.sh@35 -- # sleep 3 00:06:27.491 [2024-12-15 13:20:33.076494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.491 [2024-12-15 13:20:33.117413] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:27.491 [2024-12-15 13:20:33.117423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.491 [2024-12-15 13:20:33.170924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.491 [2024-12-15 13:20:33.171031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.814 13:20:35 -- event/event.sh@38 -- # waitforlisten 68863 /var/tmp/spdk-nbd.sock 00:06:30.814 13:20:35 -- common/autotest_common.sh@829 -- # '[' -z 68863 ']' 00:06:30.814 13:20:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.814 13:20:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.814 13:20:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.814 13:20:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.814 13:20:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 13:20:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.814 13:20:36 -- common/autotest_common.sh@862 -- # return 0 00:06:30.814 13:20:36 -- event/event.sh@39 -- # killprocess 68863 00:06:30.814 13:20:36 -- common/autotest_common.sh@936 -- # '[' -z 68863 ']' 00:06:30.814 13:20:36 -- common/autotest_common.sh@940 -- # kill -0 68863 00:06:30.814 13:20:36 -- common/autotest_common.sh@941 -- # uname 00:06:30.814 13:20:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.814 13:20:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68863 00:06:30.814 killing process with pid 68863 00:06:30.814 13:20:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:30.814 13:20:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:30.814 13:20:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68863' 00:06:30.814 13:20:36 -- common/autotest_common.sh@955 -- # kill 68863 00:06:30.814 13:20:36 -- common/autotest_common.sh@960 -- # wait 68863 00:06:30.814 spdk_app_start is called in Round 0. 00:06:30.814 Shutdown signal received, stop current app iteration 00:06:30.814 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.814 spdk_app_start is called in Round 1. 00:06:30.814 Shutdown signal received, stop current app iteration 00:06:30.814 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.814 spdk_app_start is called in Round 2. 00:06:30.814 Shutdown signal received, stop current app iteration 00:06:30.814 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:30.814 spdk_app_start is called in Round 3. 
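Annotation: the killprocess helper traced above checks that the PID is still alive and still an SPDK reactor before signalling it, then waits for it to exit. A condensed sketch of that behaviour (simplified; the real helper also special-cases processes started through sudo):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
            echo "killing process with pid $pid ($name)"
        fi
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }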
00:06:30.814 Shutdown signal received, stop current app iteration 00:06:30.814 13:20:36 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:30.814 13:20:36 -- event/event.sh@42 -- # return 0 00:06:30.814 00:06:30.814 real 0m19.280s 00:06:30.814 user 0m43.682s 00:06:30.814 sys 0m2.997s 00:06:30.814 13:20:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.814 13:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 END TEST app_repeat 00:06:30.814 ************************************ 00:06:30.814 13:20:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:30.814 13:20:36 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:30.814 13:20:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.814 13:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.814 13:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.814 ************************************ 00:06:30.814 START TEST cpu_locks 00:06:30.814 ************************************ 00:06:30.814 13:20:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:31.073 * Looking for test storage... 00:06:31.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:31.073 13:20:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:31.073 13:20:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:31.073 13:20:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:31.073 13:20:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:31.073 13:20:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:31.073 13:20:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:31.073 13:20:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:31.073 13:20:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:31.073 13:20:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:31.073 13:20:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.073 13:20:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:31.073 13:20:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:31.073 13:20:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:31.073 13:20:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:31.073 13:20:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:31.073 13:20:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:31.073 13:20:36 -- scripts/common.sh@344 -- # : 1 00:06:31.073 13:20:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:31.073 13:20:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.073 13:20:36 -- scripts/common.sh@364 -- # decimal 1 00:06:31.073 13:20:36 -- scripts/common.sh@352 -- # local d=1 00:06:31.073 13:20:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.073 13:20:36 -- scripts/common.sh@354 -- # echo 1 00:06:31.073 13:20:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:31.073 13:20:36 -- scripts/common.sh@365 -- # decimal 2 00:06:31.073 13:20:36 -- scripts/common.sh@352 -- # local d=2 00:06:31.073 13:20:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.073 13:20:36 -- scripts/common.sh@354 -- # echo 2 00:06:31.073 13:20:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:31.073 13:20:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:31.073 13:20:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:31.073 13:20:36 -- scripts/common.sh@367 -- # return 0 00:06:31.073 13:20:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.073 13:20:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:31.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.073 --rc genhtml_branch_coverage=1 00:06:31.073 --rc genhtml_function_coverage=1 00:06:31.073 --rc genhtml_legend=1 00:06:31.073 --rc geninfo_all_blocks=1 00:06:31.073 --rc geninfo_unexecuted_blocks=1 00:06:31.073 00:06:31.073 ' 00:06:31.073 13:20:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:31.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.073 --rc genhtml_branch_coverage=1 00:06:31.073 --rc genhtml_function_coverage=1 00:06:31.073 --rc genhtml_legend=1 00:06:31.073 --rc geninfo_all_blocks=1 00:06:31.073 --rc geninfo_unexecuted_blocks=1 00:06:31.073 00:06:31.073 ' 00:06:31.073 13:20:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:31.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.073 --rc genhtml_branch_coverage=1 00:06:31.073 --rc genhtml_function_coverage=1 00:06:31.073 --rc genhtml_legend=1 00:06:31.073 --rc geninfo_all_blocks=1 00:06:31.073 --rc geninfo_unexecuted_blocks=1 00:06:31.073 00:06:31.073 ' 00:06:31.073 13:20:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:31.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.073 --rc genhtml_branch_coverage=1 00:06:31.073 --rc genhtml_function_coverage=1 00:06:31.073 --rc genhtml_legend=1 00:06:31.073 --rc geninfo_all_blocks=1 00:06:31.074 --rc geninfo_unexecuted_blocks=1 00:06:31.074 00:06:31.074 ' 00:06:31.074 13:20:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.074 13:20:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.074 13:20:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.074 13:20:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.074 13:20:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.074 13:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.074 13:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:31.074 ************************************ 00:06:31.074 START TEST default_locks 00:06:31.074 ************************************ 00:06:31.074 13:20:36 -- common/autotest_common.sh@1114 -- # default_locks 00:06:31.074 13:20:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69500 00:06:31.074 13:20:36 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.074 13:20:36 -- event/cpu_locks.sh@47 -- # waitforlisten 
69500 00:06:31.074 13:20:36 -- common/autotest_common.sh@829 -- # '[' -z 69500 ']' 00:06:31.074 13:20:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.074 13:20:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.074 13:20:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.074 13:20:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.074 13:20:36 -- common/autotest_common.sh@10 -- # set +x 00:06:31.074 [2024-12-15 13:20:36.665634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.074 [2024-12-15 13:20:36.665745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69500 ] 00:06:31.332 [2024-12-15 13:20:36.794990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.332 [2024-12-15 13:20:36.864846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.332 [2024-12-15 13:20:36.865026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.268 13:20:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.268 13:20:37 -- common/autotest_common.sh@862 -- # return 0 00:06:32.268 13:20:37 -- event/cpu_locks.sh@49 -- # locks_exist 69500 00:06:32.268 13:20:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.268 13:20:37 -- event/cpu_locks.sh@22 -- # lslocks -p 69500 00:06:32.268 13:20:37 -- event/cpu_locks.sh@50 -- # killprocess 69500 00:06:32.268 13:20:37 -- common/autotest_common.sh@936 -- # '[' -z 69500 ']' 00:06:32.268 13:20:37 -- common/autotest_common.sh@940 -- # kill -0 69500 00:06:32.268 13:20:37 -- common/autotest_common.sh@941 -- # uname 00:06:32.268 13:20:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.268 13:20:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69500 00:06:32.268 13:20:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.268 killing process with pid 69500 00:06:32.268 13:20:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.268 13:20:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69500' 00:06:32.268 13:20:37 -- common/autotest_common.sh@955 -- # kill 69500 00:06:32.268 13:20:37 -- common/autotest_common.sh@960 -- # wait 69500 00:06:32.836 13:20:38 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69500 00:06:32.836 13:20:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:32.836 13:20:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69500 00:06:32.836 13:20:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.836 13:20:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.836 13:20:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.836 13:20:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.836 13:20:38 -- common/autotest_common.sh@653 -- # waitforlisten 69500 00:06:32.836 13:20:38 -- common/autotest_common.sh@829 -- # '[' -z 69500 ']' 00:06:32.836 13:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.836 13:20:38 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.836 13:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.836 13:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.836 13:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.836 ERROR: process (pid: 69500) is no longer running 00:06:32.836 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69500) - No such process 00:06:32.836 13:20:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.836 13:20:38 -- common/autotest_common.sh@862 -- # return 1 00:06:32.836 13:20:38 -- common/autotest_common.sh@653 -- # es=1 00:06:32.836 13:20:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.836 13:20:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.836 13:20:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.836 13:20:38 -- event/cpu_locks.sh@54 -- # no_locks 00:06:32.836 13:20:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.836 13:20:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.837 13:20:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.837 00:06:32.837 real 0m1.623s 00:06:32.837 user 0m1.739s 00:06:32.837 sys 0m0.454s 00:06:32.837 13:20:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.837 ************************************ 00:06:32.837 END TEST default_locks 00:06:32.837 ************************************ 00:06:32.837 13:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.837 13:20:38 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:32.837 13:20:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.837 13:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.837 13:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.837 ************************************ 00:06:32.837 START TEST default_locks_via_rpc 00:06:32.837 ************************************ 00:06:32.837 13:20:38 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:32.837 13:20:38 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69559 00:06:32.837 13:20:38 -- event/cpu_locks.sh@63 -- # waitforlisten 69559 00:06:32.837 13:20:38 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.837 13:20:38 -- common/autotest_common.sh@829 -- # '[' -z 69559 ']' 00:06:32.837 13:20:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.837 13:20:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.837 13:20:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.837 13:20:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.837 13:20:38 -- common/autotest_common.sh@10 -- # set +x 00:06:32.837 [2024-12-15 13:20:38.347978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
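Annotation: every pass/fail decision in cpu_locks.sh comes down to whether the target process holds one of the per-core lock files (/var/tmp/spdk_cpu_lock_*). The check itself is a one-liner around lslocks; a sketch of the locks_exist idiom plus an example call (variable names illustrative):

    # does process $1 hold an SPDK per-core CPU lock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # e.g. assert that a freshly started target claimed its core
    locks_exist "$spdk_tgt_pid" && echo "core lock held by pid $spdk_tgt_pid"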
00:06:32.837 [2024-12-15 13:20:38.348076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69559 ] 00:06:32.837 [2024-12-15 13:20:38.484975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.095 [2024-12-15 13:20:38.549447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.096 [2024-12-15 13:20:38.549647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.663 13:20:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.663 13:20:39 -- common/autotest_common.sh@862 -- # return 0 00:06:33.663 13:20:39 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:33.663 13:20:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.663 13:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:33.663 13:20:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.663 13:20:39 -- event/cpu_locks.sh@67 -- # no_locks 00:06:33.663 13:20:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.663 13:20:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.663 13:20:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.663 13:20:39 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.663 13:20:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.663 13:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:33.663 13:20:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.663 13:20:39 -- event/cpu_locks.sh@71 -- # locks_exist 69559 00:06:33.663 13:20:39 -- event/cpu_locks.sh@22 -- # lslocks -p 69559 00:06:33.663 13:20:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.231 13:20:39 -- event/cpu_locks.sh@73 -- # killprocess 69559 00:06:34.231 13:20:39 -- common/autotest_common.sh@936 -- # '[' -z 69559 ']' 00:06:34.231 13:20:39 -- common/autotest_common.sh@940 -- # kill -0 69559 00:06:34.231 13:20:39 -- common/autotest_common.sh@941 -- # uname 00:06:34.231 13:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.231 13:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69559 00:06:34.231 13:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.231 killing process with pid 69559 00:06:34.231 13:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.231 13:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69559' 00:06:34.231 13:20:39 -- common/autotest_common.sh@955 -- # kill 69559 00:06:34.231 13:20:39 -- common/autotest_common.sh@960 -- # wait 69559 00:06:34.489 00:06:34.489 real 0m1.807s 00:06:34.489 user 0m1.963s 00:06:34.489 sys 0m0.535s 00:06:34.489 ************************************ 00:06:34.489 END TEST default_locks_via_rpc 00:06:34.489 ************************************ 00:06:34.489 13:20:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.489 13:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.489 13:20:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.489 13:20:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.489 13:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.489 13:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.489 
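Annotation: default_locks_via_rpc drives the same lock from the RPC side: disable the core locks at runtime, confirm no spdk_cpu_lock entries remain, re-enable them, confirm the lock is back. A condensed sketch of that sequence (rpc.py path and error handling simplified):

    rpc="scripts/rpc.py"                     # assumes the SPDK repo root as cwd
    $rpc framework_disable_cpumask_locks     # release the per-core lock files
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected lock held" >&2

    $rpc framework_enable_cpumask_locks      # re-acquire them
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock || echo "expected lock missing" >&2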
************************************ 00:06:34.489 START TEST non_locking_app_on_locked_coremask 00:06:34.489 ************************************ 00:06:34.489 13:20:40 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:34.489 13:20:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69628 00:06:34.489 13:20:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.489 13:20:40 -- event/cpu_locks.sh@81 -- # waitforlisten 69628 /var/tmp/spdk.sock 00:06:34.489 13:20:40 -- common/autotest_common.sh@829 -- # '[' -z 69628 ']' 00:06:34.489 13:20:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.489 13:20:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.489 13:20:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.489 13:20:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.489 13:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.748 [2024-12-15 13:20:40.238418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:34.748 [2024-12-15 13:20:40.238549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69628 ] 00:06:34.748 [2024-12-15 13:20:40.390552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.006 [2024-12-15 13:20:40.445405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.006 [2024-12-15 13:20:40.445577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.574 13:20:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.574 13:20:41 -- common/autotest_common.sh@862 -- # return 0 00:06:35.574 13:20:41 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.574 13:20:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69656 00:06:35.574 13:20:41 -- event/cpu_locks.sh@85 -- # waitforlisten 69656 /var/tmp/spdk2.sock 00:06:35.574 13:20:41 -- common/autotest_common.sh@829 -- # '[' -z 69656 ']' 00:06:35.574 13:20:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.574 13:20:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.574 13:20:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.574 13:20:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.574 13:20:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.574 [2024-12-15 13:20:41.259911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.574 [2024-12-15 13:20:41.260010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69656 ] 00:06:35.832 [2024-12-15 13:20:41.398504] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:35.832 [2024-12-15 13:20:41.398552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.091 [2024-12-15 13:20:41.522761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.091 [2024-12-15 13:20:41.522910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.658 13:20:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.658 13:20:42 -- common/autotest_common.sh@862 -- # return 0 00:06:36.658 13:20:42 -- event/cpu_locks.sh@87 -- # locks_exist 69628 00:06:36.658 13:20:42 -- event/cpu_locks.sh@22 -- # lslocks -p 69628 00:06:36.658 13:20:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.594 13:20:43 -- event/cpu_locks.sh@89 -- # killprocess 69628 00:06:37.594 13:20:43 -- common/autotest_common.sh@936 -- # '[' -z 69628 ']' 00:06:37.594 13:20:43 -- common/autotest_common.sh@940 -- # kill -0 69628 00:06:37.594 13:20:43 -- common/autotest_common.sh@941 -- # uname 00:06:37.594 13:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.594 13:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69628 00:06:37.594 13:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:37.594 killing process with pid 69628 00:06:37.594 13:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:37.594 13:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69628' 00:06:37.594 13:20:43 -- common/autotest_common.sh@955 -- # kill 69628 00:06:37.594 13:20:43 -- common/autotest_common.sh@960 -- # wait 69628 00:06:38.162 13:20:43 -- event/cpu_locks.sh@90 -- # killprocess 69656 00:06:38.162 13:20:43 -- common/autotest_common.sh@936 -- # '[' -z 69656 ']' 00:06:38.162 13:20:43 -- common/autotest_common.sh@940 -- # kill -0 69656 00:06:38.162 13:20:43 -- common/autotest_common.sh@941 -- # uname 00:06:38.162 13:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.162 13:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69656 00:06:38.162 13:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.162 13:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.162 killing process with pid 69656 00:06:38.162 13:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69656' 00:06:38.162 13:20:43 -- common/autotest_common.sh@955 -- # kill 69656 00:06:38.162 13:20:43 -- common/autotest_common.sh@960 -- # wait 69656 00:06:38.421 00:06:38.421 real 0m3.941s 00:06:38.421 user 0m4.409s 00:06:38.421 sys 0m1.136s 00:06:38.421 13:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.421 13:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.421 ************************************ 00:06:38.421 END TEST non_locking_app_on_locked_coremask 00:06:38.421 ************************************ 00:06:38.680 13:20:44 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.680 13:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.680 13:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.680 13:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 ************************************ 00:06:38.680 START TEST locking_app_on_unlocked_coremask 00:06:38.680 ************************************ 00:06:38.680 13:20:44 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:38.680 13:20:44 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69735 00:06:38.680 13:20:44 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.680 13:20:44 -- event/cpu_locks.sh@99 -- # waitforlisten 69735 /var/tmp/spdk.sock 00:06:38.680 13:20:44 -- common/autotest_common.sh@829 -- # '[' -z 69735 ']' 00:06:38.680 13:20:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.680 13:20:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.680 13:20:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.680 13:20:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.680 13:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.680 [2024-12-15 13:20:44.199843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.680 [2024-12-15 13:20:44.199937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69735 ] 00:06:38.680 [2024-12-15 13:20:44.337209] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.680 [2024-12-15 13:20:44.337258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.938 [2024-12-15 13:20:44.390510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.938 [2024-12-15 13:20:44.390733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.874 13:20:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.874 13:20:45 -- common/autotest_common.sh@862 -- # return 0 00:06:39.874 13:20:45 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69763 00:06:39.874 13:20:45 -- event/cpu_locks.sh@103 -- # waitforlisten 69763 /var/tmp/spdk2.sock 00:06:39.874 13:20:45 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.874 13:20:45 -- common/autotest_common.sh@829 -- # '[' -z 69763 ']' 00:06:39.874 13:20:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.874 13:20:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.874 13:20:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.874 13:20:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.874 13:20:45 -- common/autotest_common.sh@10 -- # set +x 00:06:39.874 [2024-12-15 13:20:45.272463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:39.874 [2024-12-15 13:20:45.272577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69763 ] 00:06:39.874 [2024-12-15 13:20:45.412761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.874 [2024-12-15 13:20:45.530348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.874 [2024-12-15 13:20:45.530507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.810 13:20:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.810 13:20:46 -- common/autotest_common.sh@862 -- # return 0 00:06:40.810 13:20:46 -- event/cpu_locks.sh@105 -- # locks_exist 69763 00:06:40.810 13:20:46 -- event/cpu_locks.sh@22 -- # lslocks -p 69763 00:06:40.810 13:20:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.069 13:20:46 -- event/cpu_locks.sh@107 -- # killprocess 69735 00:06:41.069 13:20:46 -- common/autotest_common.sh@936 -- # '[' -z 69735 ']' 00:06:41.069 13:20:46 -- common/autotest_common.sh@940 -- # kill -0 69735 00:06:41.069 13:20:46 -- common/autotest_common.sh@941 -- # uname 00:06:41.069 13:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.069 13:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69735 00:06:41.328 killing process with pid 69735 00:06:41.328 13:20:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.328 13:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.328 13:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69735' 00:06:41.328 13:20:46 -- common/autotest_common.sh@955 -- # kill 69735 00:06:41.328 13:20:46 -- common/autotest_common.sh@960 -- # wait 69735 00:06:41.936 13:20:47 -- event/cpu_locks.sh@108 -- # killprocess 69763 00:06:41.936 13:20:47 -- common/autotest_common.sh@936 -- # '[' -z 69763 ']' 00:06:41.936 13:20:47 -- common/autotest_common.sh@940 -- # kill -0 69763 00:06:41.936 13:20:47 -- common/autotest_common.sh@941 -- # uname 00:06:41.936 13:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.936 13:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69763 00:06:41.936 killing process with pid 69763 00:06:41.936 13:20:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.936 13:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.936 13:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69763' 00:06:41.936 13:20:47 -- common/autotest_common.sh@955 -- # kill 69763 00:06:41.936 13:20:47 -- common/autotest_common.sh@960 -- # wait 69763 00:06:42.195 00:06:42.195 real 0m3.656s 00:06:42.195 user 0m4.085s 00:06:42.195 sys 0m0.981s 00:06:42.195 13:20:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.195 ************************************ 00:06:42.195 13:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:42.195 END TEST locking_app_on_unlocked_coremask 00:06:42.195 ************************************ 00:06:42.195 13:20:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.195 13:20:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.195 13:20:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.195 13:20:47 -- common/autotest_common.sh@10 -- # set +x 
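Annotation: the two tests above are mirror images: non_locking_app_on_locked_coremask starts the locking instance first and the --disable-cpumask-locks instance second, while locking_app_on_unlocked_coremask does the reverse, so the second (locking) instance can still claim core 0. A simplified sketch of the second variant's setup (waitforlisten and cleanup omitted):

    # first instance: runs on core 0 but takes no lock file
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    pid1=$!

    # second instance: same core mask, default locking, separate RPC socket
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!

    # only the second instance should show up in lslocks
    lslocks -p "$pid2" | grep -q spdk_cpu_lock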
00:06:42.195 ************************************ 00:06:42.195 START TEST locking_app_on_locked_coremask 00:06:42.195 ************************************ 00:06:42.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.195 13:20:47 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:42.195 13:20:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69836 00:06:42.195 13:20:47 -- event/cpu_locks.sh@116 -- # waitforlisten 69836 /var/tmp/spdk.sock 00:06:42.195 13:20:47 -- common/autotest_common.sh@829 -- # '[' -z 69836 ']' 00:06:42.195 13:20:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.195 13:20:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.195 13:20:47 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.195 13:20:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.195 13:20:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.195 13:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:42.454 [2024-12-15 13:20:47.901831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.454 [2024-12-15 13:20:47.901922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69836 ] 00:06:42.454 [2024-12-15 13:20:48.041167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.454 [2024-12-15 13:20:48.104795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.454 [2024-12-15 13:20:48.104999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.391 13:20:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.391 13:20:48 -- common/autotest_common.sh@862 -- # return 0 00:06:43.391 13:20:48 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.391 13:20:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69864 00:06:43.391 13:20:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69864 /var/tmp/spdk2.sock 00:06:43.391 13:20:48 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.391 13:20:48 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69864 /var/tmp/spdk2.sock 00:06:43.391 13:20:48 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.391 13:20:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.391 13:20:48 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.391 13:20:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.391 13:20:48 -- common/autotest_common.sh@653 -- # waitforlisten 69864 /var/tmp/spdk2.sock 00:06:43.391 13:20:48 -- common/autotest_common.sh@829 -- # '[' -z 69864 ']' 00:06:43.391 13:20:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.391 13:20:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.391 13:20:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:43.391 13:20:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.391 13:20:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.391 [2024-12-15 13:20:48.938827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.391 [2024-12-15 13:20:48.938909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69864 ] 00:06:43.391 [2024-12-15 13:20:49.071100] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69836 has claimed it. 00:06:43.391 [2024-12-15 13:20:49.071183] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.328 ERROR: process (pid: 69864) is no longer running 00:06:44.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69864) - No such process 00:06:44.328 13:20:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.328 13:20:49 -- common/autotest_common.sh@862 -- # return 1 00:06:44.328 13:20:49 -- common/autotest_common.sh@653 -- # es=1 00:06:44.328 13:20:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.328 13:20:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.328 13:20:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.328 13:20:49 -- event/cpu_locks.sh@122 -- # locks_exist 69836 00:06:44.328 13:20:49 -- event/cpu_locks.sh@22 -- # lslocks -p 69836 00:06:44.328 13:20:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.587 13:20:50 -- event/cpu_locks.sh@124 -- # killprocess 69836 00:06:44.587 13:20:50 -- common/autotest_common.sh@936 -- # '[' -z 69836 ']' 00:06:44.587 13:20:50 -- common/autotest_common.sh@940 -- # kill -0 69836 00:06:44.587 13:20:50 -- common/autotest_common.sh@941 -- # uname 00:06:44.587 13:20:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.587 13:20:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69836 00:06:44.587 killing process with pid 69836 00:06:44.587 13:20:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.587 13:20:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.587 13:20:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69836' 00:06:44.587 13:20:50 -- common/autotest_common.sh@955 -- # kill 69836 00:06:44.587 13:20:50 -- common/autotest_common.sh@960 -- # wait 69836 00:06:44.845 ************************************ 00:06:44.845 END TEST locking_app_on_locked_coremask 00:06:44.845 ************************************ 00:06:44.845 00:06:44.845 real 0m2.635s 00:06:44.845 user 0m3.080s 00:06:44.845 sys 0m0.631s 00:06:44.845 13:20:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.845 13:20:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.845 13:20:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.845 13:20:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:44.845 13:20:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.845 13:20:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.845 ************************************ 00:06:44.845 START TEST locking_overlapped_coremask 00:06:44.845 ************************************ 00:06:44.845 13:20:50 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:44.845 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.845 13:20:50 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69916 00:06:44.845 13:20:50 -- event/cpu_locks.sh@133 -- # waitforlisten 69916 /var/tmp/spdk.sock 00:06:44.845 13:20:50 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.845 13:20:50 -- common/autotest_common.sh@829 -- # '[' -z 69916 ']' 00:06:44.845 13:20:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.845 13:20:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.845 13:20:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.845 13:20:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.845 13:20:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.105 [2024-12-15 13:20:50.590271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.105 [2024-12-15 13:20:50.590536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69916 ] 00:06:45.105 [2024-12-15 13:20:50.729385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.105 [2024-12-15 13:20:50.789482] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.105 [2024-12-15 13:20:50.790079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.105 [2024-12-15 13:20:50.790220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.105 [2024-12-15 13:20:50.790223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.042 13:20:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.042 13:20:51 -- common/autotest_common.sh@862 -- # return 0 00:06:46.042 13:20:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69946 00:06:46.042 13:20:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69946 /var/tmp/spdk2.sock 00:06:46.042 13:20:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:46.042 13:20:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:46.042 13:20:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69946 /var/tmp/spdk2.sock 00:06:46.042 13:20:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:46.042 13:20:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.042 13:20:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.042 13:20:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.042 13:20:51 -- common/autotest_common.sh@653 -- # waitforlisten 69946 /var/tmp/spdk2.sock 00:06:46.042 13:20:51 -- common/autotest_common.sh@829 -- # '[' -z 69946 ']' 00:06:46.042 13:20:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.042 13:20:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.042 13:20:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:46.042 13:20:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.042 13:20:51 -- common/autotest_common.sh@10 -- # set +x 00:06:46.042 [2024-12-15 13:20:51.578873] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.042 [2024-12-15 13:20:51.579398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69946 ] 00:06:46.042 [2024-12-15 13:20:51.721683] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69916 has claimed it. 00:06:46.042 [2024-12-15 13:20:51.725684] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.610 ERROR: process (pid: 69946) is no longer running 00:06:46.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69946) - No such process 00:06:46.610 13:20:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.610 13:20:52 -- common/autotest_common.sh@862 -- # return 1 00:06:46.610 13:20:52 -- common/autotest_common.sh@653 -- # es=1 00:06:46.610 13:20:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.610 13:20:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.610 13:20:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.610 13:20:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.610 13:20:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.610 13:20:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.610 13:20:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.610 13:20:52 -- event/cpu_locks.sh@141 -- # killprocess 69916 00:06:46.610 13:20:52 -- common/autotest_common.sh@936 -- # '[' -z 69916 ']' 00:06:46.610 13:20:52 -- common/autotest_common.sh@940 -- # kill -0 69916 00:06:46.610 13:20:52 -- common/autotest_common.sh@941 -- # uname 00:06:46.869 13:20:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.869 13:20:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69916 00:06:46.869 13:20:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.869 13:20:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.869 13:20:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69916' 00:06:46.869 killing process with pid 69916 00:06:46.869 13:20:52 -- common/autotest_common.sh@955 -- # kill 69916 00:06:46.869 13:20:52 -- common/autotest_common.sh@960 -- # wait 69916 00:06:47.128 00:06:47.128 real 0m2.169s 00:06:47.128 user 0m6.127s 00:06:47.128 sys 0m0.434s 00:06:47.128 13:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.128 13:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.128 ************************************ 00:06:47.128 END TEST locking_overlapped_coremask 00:06:47.128 ************************************ 00:06:47.128 13:20:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:47.128 13:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.128 13:20:52 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.128 13:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.128 ************************************ 00:06:47.128 START TEST locking_overlapped_coremask_via_rpc 00:06:47.129 ************************************ 00:06:47.129 13:20:52 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:47.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.129 13:20:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69992 00:06:47.129 13:20:52 -- event/cpu_locks.sh@149 -- # waitforlisten 69992 /var/tmp/spdk.sock 00:06:47.129 13:20:52 -- common/autotest_common.sh@829 -- # '[' -z 69992 ']' 00:06:47.129 13:20:52 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:47.129 13:20:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.129 13:20:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.129 13:20:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.129 13:20:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.129 13:20:52 -- common/autotest_common.sh@10 -- # set +x 00:06:47.129 [2024-12-15 13:20:52.797493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.129 [2024-12-15 13:20:52.797582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:06:47.388 [2024-12-15 13:20:52.929901] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.388 [2024-12-15 13:20:52.929934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.388 [2024-12-15 13:20:53.000463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.388 [2024-12-15 13:20:53.000878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.388 [2024-12-15 13:20:53.001017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.388 [2024-12-15 13:20:53.001020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.325 13:20:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.325 13:20:53 -- common/autotest_common.sh@862 -- # return 0 00:06:48.325 13:20:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70022 00:06:48.325 13:20:53 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.325 13:20:53 -- event/cpu_locks.sh@153 -- # waitforlisten 70022 /var/tmp/spdk2.sock 00:06:48.325 13:20:53 -- common/autotest_common.sh@829 -- # '[' -z 70022 ']' 00:06:48.325 13:20:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.325 13:20:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.325 13:20:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:48.325 13:20:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.325 13:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:48.325 [2024-12-15 13:20:53.774233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.325 [2024-12-15 13:20:53.774334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70022 ] 00:06:48.325 [2024-12-15 13:20:53.915572] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.325 [2024-12-15 13:20:53.915629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.584 [2024-12-15 13:20:54.060549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.584 [2024-12-15 13:20:54.060855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.584 [2024-12-15 13:20:54.064709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:48.584 [2024-12-15 13:20:54.064711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.152 13:20:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.152 13:20:54 -- common/autotest_common.sh@862 -- # return 0 00:06:49.152 13:20:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.152 13:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.152 13:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.153 13:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.153 13:20:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.153 13:20:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:49.153 13:20:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.153 13:20:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:49.153 13:20:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.153 13:20:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:49.153 13:20:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.153 13:20:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:49.153 13:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.153 13:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.153 [2024-12-15 13:20:54.769754] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69992 has claimed it. 00:06:49.153 2024/12/15 13:20:54 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:49.153 request: 00:06:49.153 { 00:06:49.153 "method": "framework_enable_cpumask_locks", 00:06:49.153 "params": {} 00:06:49.153 } 00:06:49.153 Got JSON-RPC error response 00:06:49.153 GoRPCClient: error on JSON-RPC call 00:06:49.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
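The failed claim above can be reproduced by hand against the second target's socket; a minimal sketch using the repo's rpc.py helper (script path assumed from the standard layout under /home/vagrant/spdk_repo/spdk):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with -32603 'Failed to claim CPU core: 2' while pid 69992 still holds core 2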
00:06:49.153 13:20:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:49.153 13:20:54 -- common/autotest_common.sh@653 -- # es=1 00:06:49.153 13:20:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.153 13:20:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.153 13:20:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.153 13:20:54 -- event/cpu_locks.sh@158 -- # waitforlisten 69992 /var/tmp/spdk.sock 00:06:49.153 13:20:54 -- common/autotest_common.sh@829 -- # '[' -z 69992 ']' 00:06:49.153 13:20:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.153 13:20:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.153 13:20:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.153 13:20:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.153 13:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.412 13:20:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.412 13:20:55 -- common/autotest_common.sh@862 -- # return 0 00:06:49.412 13:20:55 -- event/cpu_locks.sh@159 -- # waitforlisten 70022 /var/tmp/spdk2.sock 00:06:49.412 13:20:55 -- common/autotest_common.sh@829 -- # '[' -z 70022 ']' 00:06:49.412 13:20:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.412 13:20:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.412 13:20:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.412 13:20:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.412 13:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:49.671 13:20:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.671 13:20:55 -- common/autotest_common.sh@862 -- # return 0 00:06:49.671 13:20:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:49.671 13:20:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.671 13:20:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.671 13:20:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.671 00:06:49.671 real 0m2.553s 00:06:49.671 user 0m1.308s 00:06:49.671 sys 0m0.175s 00:06:49.671 13:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.671 13:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:49.671 ************************************ 00:06:49.671 END TEST locking_overlapped_coremask_via_rpc 00:06:49.671 ************************************ 00:06:49.671 13:20:55 -- event/cpu_locks.sh@174 -- # cleanup 00:06:49.671 13:20:55 -- event/cpu_locks.sh@15 -- # [[ -z 69992 ]] 00:06:49.671 13:20:55 -- event/cpu_locks.sh@15 -- # killprocess 69992 00:06:49.671 13:20:55 -- common/autotest_common.sh@936 -- # '[' -z 69992 ']' 00:06:49.671 13:20:55 -- common/autotest_common.sh@940 -- # kill -0 69992 00:06:49.671 13:20:55 -- common/autotest_common.sh@941 -- # uname 00:06:49.671 13:20:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.671 13:20:55 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 69992 00:06:49.928 killing process with pid 69992 00:06:49.928 13:20:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.928 13:20:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.928 13:20:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69992' 00:06:49.928 13:20:55 -- common/autotest_common.sh@955 -- # kill 69992 00:06:49.928 13:20:55 -- common/autotest_common.sh@960 -- # wait 69992 00:06:50.187 13:20:55 -- event/cpu_locks.sh@16 -- # [[ -z 70022 ]] 00:06:50.187 13:20:55 -- event/cpu_locks.sh@16 -- # killprocess 70022 00:06:50.187 13:20:55 -- common/autotest_common.sh@936 -- # '[' -z 70022 ']' 00:06:50.187 13:20:55 -- common/autotest_common.sh@940 -- # kill -0 70022 00:06:50.187 13:20:55 -- common/autotest_common.sh@941 -- # uname 00:06:50.187 13:20:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.187 13:20:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70022 00:06:50.187 killing process with pid 70022 00:06:50.187 13:20:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:50.187 13:20:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:50.187 13:20:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70022' 00:06:50.187 13:20:55 -- common/autotest_common.sh@955 -- # kill 70022 00:06:50.187 13:20:55 -- common/autotest_common.sh@960 -- # wait 70022 00:06:50.446 13:20:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.446 13:20:56 -- event/cpu_locks.sh@1 -- # cleanup 00:06:50.446 13:20:56 -- event/cpu_locks.sh@15 -- # [[ -z 69992 ]] 00:06:50.446 13:20:56 -- event/cpu_locks.sh@15 -- # killprocess 69992 00:06:50.446 13:20:56 -- common/autotest_common.sh@936 -- # '[' -z 69992 ']' 00:06:50.446 Process with pid 69992 is not found 00:06:50.446 13:20:56 -- common/autotest_common.sh@940 -- # kill -0 69992 00:06:50.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69992) - No such process 00:06:50.446 13:20:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69992 is not found' 00:06:50.446 13:20:56 -- event/cpu_locks.sh@16 -- # [[ -z 70022 ]] 00:06:50.446 13:20:56 -- event/cpu_locks.sh@16 -- # killprocess 70022 00:06:50.446 13:20:56 -- common/autotest_common.sh@936 -- # '[' -z 70022 ']' 00:06:50.446 Process with pid 70022 is not found 00:06:50.446 13:20:56 -- common/autotest_common.sh@940 -- # kill -0 70022 00:06:50.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70022) - No such process 00:06:50.446 13:20:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70022 is not found' 00:06:50.446 13:20:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:50.446 00:06:50.446 real 0m19.676s 00:06:50.446 user 0m35.008s 00:06:50.446 sys 0m5.171s 00:06:50.446 13:20:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.446 13:20:56 -- common/autotest_common.sh@10 -- # set +x 00:06:50.446 ************************************ 00:06:50.446 END TEST cpu_locks 00:06:50.446 ************************************ 00:06:50.706 ************************************ 00:06:50.706 END TEST event 00:06:50.706 ************************************ 00:06:50.706 00:06:50.706 real 0m47.839s 00:06:50.706 user 1m33.785s 00:06:50.706 sys 0m8.961s 00:06:50.706 13:20:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.706 13:20:56 -- common/autotest_common.sh@10 -- # set +x 00:06:50.706 13:20:56 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.706 13:20:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.706 13:20:56 -- common/autotest_common.sh@10 -- # set +x 00:06:50.706 ************************************ 00:06:50.706 START TEST thread 00:06:50.706 ************************************ 00:06:50.706 13:20:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:50.706 * Looking for test storage... 00:06:50.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:50.706 13:20:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.706 13:20:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.706 13:20:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.706 13:20:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.706 13:20:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.706 13:20:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.706 13:20:56 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.706 13:20:56 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.706 13:20:56 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.706 13:20:56 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.706 13:20:56 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.706 13:20:56 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.706 13:20:56 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.706 13:20:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.706 13:20:56 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.706 13:20:56 -- scripts/common.sh@344 -- # : 1 00:06:50.706 13:20:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.706 13:20:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.706 13:20:56 -- scripts/common.sh@364 -- # decimal 1 00:06:50.706 13:20:56 -- scripts/common.sh@352 -- # local d=1 00:06:50.706 13:20:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.706 13:20:56 -- scripts/common.sh@354 -- # echo 1 00:06:50.706 13:20:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.706 13:20:56 -- scripts/common.sh@365 -- # decimal 2 00:06:50.706 13:20:56 -- scripts/common.sh@352 -- # local d=2 00:06:50.706 13:20:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.706 13:20:56 -- scripts/common.sh@354 -- # echo 2 00:06:50.706 13:20:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.706 13:20:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.706 13:20:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.706 13:20:56 -- scripts/common.sh@367 -- # return 0 00:06:50.706 13:20:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.706 --rc genhtml_branch_coverage=1 00:06:50.706 --rc genhtml_function_coverage=1 00:06:50.706 --rc genhtml_legend=1 00:06:50.706 --rc geninfo_all_blocks=1 00:06:50.706 --rc geninfo_unexecuted_blocks=1 00:06:50.706 00:06:50.706 ' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.706 --rc genhtml_branch_coverage=1 00:06:50.706 --rc genhtml_function_coverage=1 00:06:50.706 --rc genhtml_legend=1 00:06:50.706 --rc geninfo_all_blocks=1 00:06:50.706 --rc geninfo_unexecuted_blocks=1 00:06:50.706 00:06:50.706 ' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.706 --rc genhtml_branch_coverage=1 00:06:50.706 --rc genhtml_function_coverage=1 00:06:50.706 --rc genhtml_legend=1 00:06:50.706 --rc geninfo_all_blocks=1 00:06:50.706 --rc geninfo_unexecuted_blocks=1 00:06:50.706 00:06:50.706 ' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.706 --rc genhtml_branch_coverage=1 00:06:50.706 --rc genhtml_function_coverage=1 00:06:50.706 --rc genhtml_legend=1 00:06:50.706 --rc geninfo_all_blocks=1 00:06:50.706 --rc geninfo_unexecuted_blocks=1 00:06:50.706 00:06:50.706 ' 00:06:50.706 13:20:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.706 13:20:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:50.706 13:20:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.706 13:20:56 -- common/autotest_common.sh@10 -- # set +x 00:06:50.706 ************************************ 00:06:50.706 START TEST thread_poller_perf 00:06:50.706 ************************************ 00:06:50.706 13:20:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:50.966 [2024-12-15 13:20:56.401707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.966 [2024-12-15 13:20:56.401976] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70181 ] 00:06:50.966 [2024-12-15 13:20:56.537495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.966 [2024-12-15 13:20:56.591749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.966 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.345 [2024-12-15T13:20:58.035Z] ====================================== 00:06:52.345 [2024-12-15T13:20:58.035Z] busy:2210625078 (cyc) 00:06:52.345 [2024-12-15T13:20:58.035Z] total_run_count: 386000 00:06:52.345 [2024-12-15T13:20:58.035Z] tsc_hz: 2200000000 (cyc) 00:06:52.345 [2024-12-15T13:20:58.035Z] ====================================== 00:06:52.345 [2024-12-15T13:20:58.035Z] poller_cost: 5727 (cyc), 2603 (nsec) 00:06:52.345 00:06:52.345 real 0m1.265s 00:06:52.345 user 0m1.104s 00:06:52.345 sys 0m0.054s 00:06:52.345 13:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.345 13:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:52.345 ************************************ 00:06:52.345 END TEST thread_poller_perf 00:06:52.345 ************************************ 00:06:52.345 13:20:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.345 13:20:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:52.345 13:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.345 13:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:52.345 ************************************ 00:06:52.345 START TEST thread_poller_perf 00:06:52.345 ************************************ 00:06:52.345 13:20:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.345 [2024-12-15 13:20:57.721938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.345 [2024-12-15 13:20:57.722520] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70213 ] 00:06:52.345 [2024-12-15 13:20:57.856323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.345 Running 1000 pollers for 1 seconds with 0 microseconds period. 
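The poller_cost reported for the -l 1 run above is consistent with busy cycles divided by run count, converted to nanoseconds via tsc_hz; a quick shell check using the counters as printed:
  echo $(( 2210625078 / 386000 ))              # ~5727 cyc per poll
  echo $(( 5727 * 1000000000 / 2200000000 ))   # ~2603 nsec at tsc_hz 2200000000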
00:06:52.345 [2024-12-15 13:20:57.903311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.281 [2024-12-15T13:20:58.971Z] ====================================== 00:06:53.281 [2024-12-15T13:20:58.971Z] busy:2202223870 (cyc) 00:06:53.281 [2024-12-15T13:20:58.971Z] total_run_count: 5321000 00:06:53.281 [2024-12-15T13:20:58.971Z] tsc_hz: 2200000000 (cyc) 00:06:53.281 [2024-12-15T13:20:58.971Z] ====================================== 00:06:53.281 [2024-12-15T13:20:58.971Z] poller_cost: 413 (cyc), 187 (nsec) 00:06:53.281 00:06:53.281 real 0m1.248s 00:06:53.281 user 0m1.092s 00:06:53.282 sys 0m0.048s 00:06:53.282 13:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.282 ************************************ 00:06:53.282 END TEST thread_poller_perf 00:06:53.282 ************************************ 00:06:53.282 13:20:58 -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 13:20:58 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.541 00:06:53.541 real 0m2.804s 00:06:53.541 user 0m2.343s 00:06:53.541 sys 0m0.243s 00:06:53.541 13:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.541 ************************************ 00:06:53.541 END TEST thread 00:06:53.541 ************************************ 00:06:53.541 13:20:58 -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 13:20:59 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:53.541 13:20:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.541 13:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 ************************************ 00:06:53.541 START TEST accel 00:06:53.541 ************************************ 00:06:53.541 13:20:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:53.541 * Looking for test storage... 00:06:53.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:53.541 13:20:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:53.541 13:20:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:53.541 13:20:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:53.541 13:20:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:53.541 13:20:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:53.541 13:20:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:53.541 13:20:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:53.541 13:20:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:53.541 13:20:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.541 13:20:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:53.541 13:20:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:53.541 13:20:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:53.541 13:20:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:53.541 13:20:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:53.541 13:20:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:53.541 13:20:59 -- scripts/common.sh@344 -- # : 1 00:06:53.541 13:20:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:53.541 13:20:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.541 13:20:59 -- scripts/common.sh@364 -- # decimal 1 00:06:53.541 13:20:59 -- scripts/common.sh@352 -- # local d=1 00:06:53.541 13:20:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.541 13:20:59 -- scripts/common.sh@354 -- # echo 1 00:06:53.541 13:20:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:53.541 13:20:59 -- scripts/common.sh@365 -- # decimal 2 00:06:53.541 13:20:59 -- scripts/common.sh@352 -- # local d=2 00:06:53.541 13:20:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.541 13:20:59 -- scripts/common.sh@354 -- # echo 2 00:06:53.541 13:20:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:53.541 13:20:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:53.541 13:20:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:53.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.541 13:20:59 -- scripts/common.sh@367 -- # return 0 00:06:53.541 13:20:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.541 --rc genhtml_branch_coverage=1 00:06:53.541 --rc genhtml_function_coverage=1 00:06:53.541 --rc genhtml_legend=1 00:06:53.541 --rc geninfo_all_blocks=1 00:06:53.541 --rc geninfo_unexecuted_blocks=1 00:06:53.541 00:06:53.541 ' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.541 --rc genhtml_branch_coverage=1 00:06:53.541 --rc genhtml_function_coverage=1 00:06:53.541 --rc genhtml_legend=1 00:06:53.541 --rc geninfo_all_blocks=1 00:06:53.541 --rc geninfo_unexecuted_blocks=1 00:06:53.541 00:06:53.541 ' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.541 --rc genhtml_branch_coverage=1 00:06:53.541 --rc genhtml_function_coverage=1 00:06:53.541 --rc genhtml_legend=1 00:06:53.541 --rc geninfo_all_blocks=1 00:06:53.541 --rc geninfo_unexecuted_blocks=1 00:06:53.541 00:06:53.541 ' 00:06:53.541 13:20:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:53.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.541 --rc genhtml_branch_coverage=1 00:06:53.541 --rc genhtml_function_coverage=1 00:06:53.541 --rc genhtml_legend=1 00:06:53.541 --rc geninfo_all_blocks=1 00:06:53.541 --rc geninfo_unexecuted_blocks=1 00:06:53.541 00:06:53.541 ' 00:06:53.541 13:20:59 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:53.541 13:20:59 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:53.541 13:20:59 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.541 13:20:59 -- accel/accel.sh@59 -- # spdk_tgt_pid=70290 00:06:53.541 13:20:59 -- accel/accel.sh@60 -- # waitforlisten 70290 00:06:53.541 13:20:59 -- common/autotest_common.sh@829 -- # '[' -z 70290 ']' 00:06:53.541 13:20:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.541 13:20:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.541 13:20:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:53.541 13:20:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.541 13:20:59 -- accel/accel.sh@58 -- # build_accel_config 00:06:53.541 13:20:59 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:53.541 13:20:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.541 13:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:53.541 13:20:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.541 13:20:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.542 13:20:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.542 13:20:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.542 13:20:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.542 13:20:59 -- accel/accel.sh@42 -- # jq -r . 00:06:53.805 [2024-12-15 13:20:59.286317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.805 [2024-12-15 13:20:59.286581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70290 ] 00:06:53.805 [2024-12-15 13:20:59.425316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.805 [2024-12-15 13:20:59.481928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.805 [2024-12-15 13:20:59.482369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.790 13:21:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.790 13:21:00 -- common/autotest_common.sh@862 -- # return 0 00:06:54.790 13:21:00 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.790 13:21:00 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:54.790 13:21:00 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.790 13:21:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.790 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:54.790 13:21:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.790 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.790 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.790 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.791 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.791 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.791 
13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.791 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.791 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.791 13:21:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # IFS== 00:06:54.791 13:21:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:54.791 13:21:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:54.791 13:21:00 -- accel/accel.sh@67 -- # killprocess 70290 00:06:54.791 13:21:00 -- common/autotest_common.sh@936 -- # '[' -z 70290 ']' 00:06:54.791 13:21:00 -- common/autotest_common.sh@940 -- # kill -0 70290 00:06:54.791 13:21:00 -- common/autotest_common.sh@941 -- # uname 00:06:54.791 13:21:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.791 13:21:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70290 00:06:54.791 13:21:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.791 killing process with pid 70290 00:06:54.791 13:21:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.791 13:21:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70290' 00:06:54.791 13:21:00 -- common/autotest_common.sh@955 -- # kill 70290 00:06:54.791 13:21:00 -- common/autotest_common.sh@960 -- # wait 70290 00:06:55.050 13:21:00 -- accel/accel.sh@68 -- # trap - ERR 00:06:55.050 13:21:00 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:55.050 13:21:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:55.050 13:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.050 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.050 13:21:00 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:55.050 13:21:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.050 13:21:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.050 13:21:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.050 13:21:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.050 13:21:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.050 13:21:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.050 13:21:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.050 13:21:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.050 13:21:00 -- accel/accel.sh@42 -- # jq -r . 
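The opcode-to-module map read in the loop above comes from a single RPC; a minimal way to dump it by hand (rpc.py path assumed from the repo layout, jq filter as used in accel.sh):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # with no accel modules configured, every opcode maps to software, as seen above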
00:06:55.050 13:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.050 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 13:21:00 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.309 13:21:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:55.309 13:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.309 13:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:55.309 ************************************ 00:06:55.309 START TEST accel_missing_filename 00:06:55.309 ************************************ 00:06:55.309 13:21:00 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:55.309 13:21:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:55.309 13:21:00 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.309 13:21:00 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:55.309 13:21:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.309 13:21:00 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:55.309 13:21:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.309 13:21:00 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:55.309 13:21:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.309 13:21:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.309 13:21:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.309 13:21:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.309 13:21:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.309 13:21:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.309 13:21:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.309 13:21:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.309 13:21:00 -- accel/accel.sh@42 -- # jq -r . 00:06:55.309 [2024-12-15 13:21:00.803331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.309 [2024-12-15 13:21:00.803429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70365 ] 00:06:55.309 [2024-12-15 13:21:00.940910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.309 [2024-12-15 13:21:00.989559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.567 [2024-12-15 13:21:01.042512] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.567 [2024-12-15 13:21:01.115119] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:55.567 A filename is required. 
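As the error says, the compress workload needs an input file via -l; a valid variant of the rejected command, using the same binary and the bib test file the next test reads (without -y, since the following compress_verify test shows verify is rejected for compress):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib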
00:06:55.567 13:21:01 -- common/autotest_common.sh@653 -- # es=234 00:06:55.567 13:21:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.567 13:21:01 -- common/autotest_common.sh@662 -- # es=106 00:06:55.567 13:21:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:55.567 13:21:01 -- common/autotest_common.sh@670 -- # es=1 00:06:55.567 13:21:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.567 00:06:55.567 real 0m0.406s 00:06:55.567 user 0m0.240s 00:06:55.567 sys 0m0.111s 00:06:55.567 13:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:55.567 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 ************************************ 00:06:55.567 END TEST accel_missing_filename 00:06:55.567 ************************************ 00:06:55.567 13:21:01 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.567 13:21:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:55.567 13:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.567 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:55.567 ************************************ 00:06:55.567 START TEST accel_compress_verify 00:06:55.567 ************************************ 00:06:55.568 13:21:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.568 13:21:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:55.568 13:21:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.568 13:21:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:55.568 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.568 13:21:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:55.568 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.568 13:21:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.568 13:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:55.568 13:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.568 13:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.568 13:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.568 13:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.568 13:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.568 13:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.568 13:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.568 13:21:01 -- accel/accel.sh@42 -- # jq -r . 00:06:55.826 [2024-12-15 13:21:01.261511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:55.826 [2024-12-15 13:21:01.261605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70384 ] 00:06:55.826 [2024-12-15 13:21:01.392302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.826 [2024-12-15 13:21:01.441875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.826 [2024-12-15 13:21:01.494366] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.085 [2024-12-15 13:21:01.568562] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:56.085 00:06:56.085 Compression does not support the verify option, aborting. 00:06:56.085 13:21:01 -- common/autotest_common.sh@653 -- # es=161 00:06:56.085 13:21:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.085 13:21:01 -- common/autotest_common.sh@662 -- # es=33 00:06:56.085 13:21:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:56.085 13:21:01 -- common/autotest_common.sh@670 -- # es=1 00:06:56.085 13:21:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.085 00:06:56.085 real 0m0.393s 00:06:56.085 user 0m0.237s 00:06:56.085 sys 0m0.104s 00:06:56.085 13:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.085 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.085 ************************************ 00:06:56.085 END TEST accel_compress_verify 00:06:56.085 ************************************ 00:06:56.085 13:21:01 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:56.085 13:21:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.085 13:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.085 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.085 ************************************ 00:06:56.085 START TEST accel_wrong_workload 00:06:56.085 ************************************ 00:06:56.085 13:21:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:56.085 13:21:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:56.085 13:21:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:56.085 13:21:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:56.085 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.085 13:21:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:56.085 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.085 13:21:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:56.085 13:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:56.085 13:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.085 13:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.085 13:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.085 13:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.085 13:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.085 13:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.085 13:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.085 13:21:01 -- accel/accel.sh@42 -- # jq -r . 
00:06:56.085 Unsupported workload type: foobar 00:06:56.085 [2024-12-15 13:21:01.700311] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:56.085 accel_perf options: 00:06:56.085 [-h help message] 00:06:56.085 [-q queue depth per core] 00:06:56.085 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.085 [-T number of threads per core 00:06:56.085 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.085 [-t time in seconds] 00:06:56.085 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.085 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.085 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.085 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.085 [-S for crc32c workload, use this seed value (default 0) 00:06:56.085 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.085 [-f for fill workload, use this BYTE value (default 255) 00:06:56.085 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.085 [-y verify result if this switch is on] 00:06:56.085 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.085 Can be used to spread operations across a wider range of memory. 00:06:56.085 13:21:01 -- common/autotest_common.sh@653 -- # es=1 00:06:56.085 13:21:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.085 13:21:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.085 13:21:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.085 00:06:56.085 real 0m0.029s 00:06:56.085 user 0m0.017s 00:06:56.085 sys 0m0.012s 00:06:56.085 13:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.085 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.085 ************************************ 00:06:56.085 END TEST accel_wrong_workload 00:06:56.085 ************************************ 00:06:56.085 13:21:01 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.085 13:21:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:56.085 13:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.085 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.085 ************************************ 00:06:56.085 START TEST accel_negative_buffers 00:06:56.085 ************************************ 00:06:56.085 13:21:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.085 13:21:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:56.086 13:21:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:56.086 13:21:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:56.086 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.086 13:21:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:56.086 13:21:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.086 13:21:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:56.086 13:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:56.086 13:21:01 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:56.086 13:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.086 13:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.086 13:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.086 13:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.086 13:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.086 13:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.086 13:21:01 -- accel/accel.sh@42 -- # jq -r . 00:06:56.345 -x option must be non-negative. 00:06:56.345 [2024-12-15 13:21:01.773379] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:56.345 accel_perf options: 00:06:56.345 [-h help message] 00:06:56.345 [-q queue depth per core] 00:06:56.345 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.345 [-T number of threads per core 00:06:56.345 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.345 [-t time in seconds] 00:06:56.345 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.345 [ dif_verify, , dif_generate, dif_generate_copy 00:06:56.345 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.345 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.345 [-S for crc32c workload, use this seed value (default 0) 00:06:56.345 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.345 [-f for fill workload, use this BYTE value (default 255) 00:06:56.345 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.345 [-y verify result if this switch is on] 00:06:56.345 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.345 Can be used to spread operations across a wider range of memory. 
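Per the usage text above, xor needs at least two source buffers, so -x must be >= 2; a minimal valid variant of the rejected command:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2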
00:06:56.345 13:21:01 -- common/autotest_common.sh@653 -- # es=1 00:06:56.345 13:21:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.345 13:21:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.345 13:21:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.345 00:06:56.345 real 0m0.029s 00:06:56.345 user 0m0.015s 00:06:56.345 sys 0m0.014s 00:06:56.345 13:21:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.345 ************************************ 00:06:56.345 END TEST accel_negative_buffers 00:06:56.345 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.345 ************************************ 00:06:56.345 13:21:01 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:56.345 13:21:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:56.345 13:21:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.345 13:21:01 -- common/autotest_common.sh@10 -- # set +x 00:06:56.345 ************************************ 00:06:56.345 START TEST accel_crc32c 00:06:56.345 ************************************ 00:06:56.345 13:21:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:56.345 13:21:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.345 13:21:01 -- accel/accel.sh@17 -- # local accel_module 00:06:56.345 13:21:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:56.345 13:21:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:56.345 13:21:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.345 13:21:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.345 13:21:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.345 13:21:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.345 13:21:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.345 13:21:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.345 13:21:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.345 13:21:01 -- accel/accel.sh@42 -- # jq -r . 00:06:56.345 [2024-12-15 13:21:01.849876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.345 [2024-12-15 13:21:01.849964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70448 ] 00:06:56.345 [2024-12-15 13:21:01.988227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.604 [2024-12-15 13:21:02.041472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.541 13:21:03 -- accel/accel.sh@18 -- # out=' 00:06:57.541 SPDK Configuration: 00:06:57.541 Core mask: 0x1 00:06:57.541 00:06:57.541 Accel Perf Configuration: 00:06:57.541 Workload Type: crc32c 00:06:57.541 CRC-32C seed: 32 00:06:57.541 Transfer size: 4096 bytes 00:06:57.541 Vector count 1 00:06:57.541 Module: software 00:06:57.541 Queue depth: 32 00:06:57.541 Allocate depth: 32 00:06:57.541 # threads/core: 1 00:06:57.541 Run time: 1 seconds 00:06:57.541 Verify: Yes 00:06:57.541 00:06:57.541 Running for 1 seconds... 
00:06:57.541 00:06:57.541 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.541 ------------------------------------------------------------------------------------ 00:06:57.541 0,0 529632/s 2068 MiB/s 0 0 00:06:57.541 ==================================================================================== 00:06:57.541 Total 529632/s 2068 MiB/s 0 0' 00:06:57.541 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:57.541 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:57.541 13:21:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.541 13:21:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.541 13:21:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:57.541 13:21:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.541 13:21:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.541 13:21:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.800 13:21:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.800 13:21:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.800 13:21:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.800 13:21:03 -- accel/accel.sh@42 -- # jq -r . 00:06:57.800 [2024-12-15 13:21:03.250617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:57.800 [2024-12-15 13:21:03.250714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70462 ] 00:06:57.800 [2024-12-15 13:21:03.385302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.800 [2024-12-15 13:21:03.450798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val=0x1 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val=crc32c 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.059 13:21:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.059 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.059 13:21:03 -- accel/accel.sh@21 -- # val=32 00:06:58.059 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val=software 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val=32 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val=32 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val=1 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val=Yes 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.060 13:21:03 -- accel/accel.sh@21 -- # val= 00:06:58.060 13:21:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.060 13:21:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.996 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- 
accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@21 -- # val= 00:06:58.997 13:21:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # IFS=: 00:06:58.997 13:21:04 -- accel/accel.sh@20 -- # read -r var val 00:06:58.997 13:21:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.997 13:21:04 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:58.997 13:21:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.997 00:06:58.997 real 0m2.821s 00:06:58.997 user 0m2.398s 00:06:58.997 sys 0m0.223s 00:06:58.997 13:21:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.997 13:21:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.997 ************************************ 00:06:58.997 END TEST accel_crc32c 00:06:58.997 ************************************ 00:06:59.256 13:21:04 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:59.256 13:21:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:59.256 13:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.256 13:21:04 -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 ************************************ 00:06:59.256 START TEST accel_crc32c_C2 00:06:59.256 ************************************ 00:06:59.256 13:21:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:59.256 13:21:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.256 13:21:04 -- accel/accel.sh@17 -- # local accel_module 00:06:59.256 13:21:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:59.256 13:21:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:59.256 13:21:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.256 13:21:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.256 13:21:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.256 13:21:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.256 13:21:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.256 13:21:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.256 13:21:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.256 13:21:04 -- accel/accel.sh@42 -- # jq -r . 00:06:59.256 [2024-12-15 13:21:04.719581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.256 [2024-12-15 13:21:04.719704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70502 ] 00:06:59.256 [2024-12-15 13:21:04.852522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.256 [2024-12-15 13:21:04.898202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.782 13:21:06 -- accel/accel.sh@18 -- # out=' 00:07:00.782 SPDK Configuration: 00:07:00.782 Core mask: 0x1 00:07:00.782 00:07:00.782 Accel Perf Configuration: 00:07:00.782 Workload Type: crc32c 00:07:00.782 CRC-32C seed: 0 00:07:00.782 Transfer size: 4096 bytes 00:07:00.782 Vector count 2 00:07:00.782 Module: software 00:07:00.782 Queue depth: 32 00:07:00.782 Allocate depth: 32 00:07:00.782 # threads/core: 1 00:07:00.782 Run time: 1 seconds 00:07:00.782 Verify: Yes 00:07:00.782 00:07:00.782 Running for 1 seconds... 
00:07:00.782 00:07:00.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.782 ------------------------------------------------------------------------------------ 00:07:00.782 0,0 428608/s 3348 MiB/s 0 0 00:07:00.782 ==================================================================================== 00:07:00.782 Total 428608/s 1674 MiB/s 0 0' 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.782 13:21:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.782 13:21:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.782 13:21:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.782 13:21:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.782 13:21:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.782 13:21:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.782 13:21:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.782 13:21:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.782 13:21:06 -- accel/accel.sh@42 -- # jq -r . 00:07:00.782 [2024-12-15 13:21:06.102454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.782 [2024-12-15 13:21:06.102562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:07:00.782 [2024-12-15 13:21:06.234551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.782 [2024-12-15 13:21:06.280475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=0x1 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=crc32c 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=0 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=software 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=32 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=32 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=1 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val=Yes 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.782 13:21:06 -- accel/accel.sh@21 -- # val= 00:07:00.782 13:21:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.782 13:21:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- 
accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@21 -- # val= 00:07:02.160 13:21:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.160 13:21:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.160 13:21:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.160 13:21:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:02.160 13:21:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.160 00:07:02.160 real 0m2.770s 00:07:02.160 user 0m2.362s 00:07:02.160 sys 0m0.211s 00:07:02.160 13:21:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.160 ************************************ 00:07:02.160 END TEST accel_crc32c_C2 00:07:02.160 ************************************ 00:07:02.160 13:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 13:21:07 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:02.160 13:21:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:02.160 13:21:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.160 13:21:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.160 ************************************ 00:07:02.160 START TEST accel_copy 00:07:02.160 ************************************ 00:07:02.160 13:21:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:02.160 13:21:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.160 13:21:07 -- accel/accel.sh@17 -- # local accel_module 00:07:02.160 13:21:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:02.160 13:21:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:02.160 13:21:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.160 13:21:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.160 13:21:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.160 13:21:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.160 13:21:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.160 13:21:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.160 13:21:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.160 13:21:07 -- accel/accel.sh@42 -- # jq -r . 00:07:02.160 [2024-12-15 13:21:07.537896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.160 [2024-12-15 13:21:07.537990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70551 ] 00:07:02.160 [2024-12-15 13:21:07.675763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.160 [2024-12-15 13:21:07.721967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.537 13:21:08 -- accel/accel.sh@18 -- # out=' 00:07:03.537 SPDK Configuration: 00:07:03.537 Core mask: 0x1 00:07:03.537 00:07:03.537 Accel Perf Configuration: 00:07:03.537 Workload Type: copy 00:07:03.537 Transfer size: 4096 bytes 00:07:03.537 Vector count 1 00:07:03.537 Module: software 00:07:03.537 Queue depth: 32 00:07:03.537 Allocate depth: 32 00:07:03.537 # threads/core: 1 00:07:03.537 Run time: 1 seconds 00:07:03.537 Verify: Yes 00:07:03.537 00:07:03.537 Running for 1 seconds... 
00:07:03.537 00:07:03.537 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.537 ------------------------------------------------------------------------------------ 00:07:03.537 0,0 392000/s 1531 MiB/s 0 0 00:07:03.537 ==================================================================================== 00:07:03.537 Total 392000/s 1531 MiB/s 0 0' 00:07:03.537 13:21:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:03.537 13:21:08 -- accel/accel.sh@20 -- # IFS=: 00:07:03.537 13:21:08 -- accel/accel.sh@20 -- # read -r var val 00:07:03.537 13:21:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.537 13:21:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.537 13:21:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.537 13:21:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.537 13:21:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.537 13:21:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.537 13:21:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.537 13:21:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.538 13:21:08 -- accel/accel.sh@42 -- # jq -r . 00:07:03.538 [2024-12-15 13:21:08.916117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.538 [2024-12-15 13:21:08.916224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70570 ] 00:07:03.538 [2024-12-15 13:21:09.043116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.538 [2024-12-15 13:21:09.089367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=0x1 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=copy 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- 
accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=software 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=32 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=32 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=1 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val=Yes 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.538 13:21:09 -- accel/accel.sh@21 -- # val= 00:07:03.538 13:21:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.538 13:21:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@21 -- # val= 00:07:04.916 13:21:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.916 13:21:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.916 13:21:10 -- 
accel/accel.sh@20 -- # read -r var val 00:07:04.916 13:21:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.916 13:21:10 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:04.916 13:21:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.916 00:07:04.916 real 0m2.756s 00:07:04.916 user 0m2.364s 00:07:04.916 sys 0m0.194s 00:07:04.916 13:21:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.916 13:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 ************************************ 00:07:04.916 END TEST accel_copy 00:07:04.916 ************************************ 00:07:04.916 13:21:10 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.916 13:21:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.916 13:21:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.916 13:21:10 -- common/autotest_common.sh@10 -- # set +x 00:07:04.916 ************************************ 00:07:04.916 START TEST accel_fill 00:07:04.916 ************************************ 00:07:04.916 13:21:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.916 13:21:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.916 13:21:10 -- accel/accel.sh@17 -- # local accel_module 00:07:04.916 13:21:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.916 13:21:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.916 13:21:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.916 13:21:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.916 13:21:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.916 13:21:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.916 13:21:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.916 13:21:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.916 13:21:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.916 13:21:10 -- accel/accel.sh@42 -- # jq -r . 00:07:04.916 [2024-12-15 13:21:10.345024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.916 [2024-12-15 13:21:10.345553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70599 ] 00:07:04.916 [2024-12-15 13:21:10.483750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.916 [2024-12-15 13:21:10.538090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.293 13:21:11 -- accel/accel.sh@18 -- # out=' 00:07:06.293 SPDK Configuration: 00:07:06.293 Core mask: 0x1 00:07:06.293 00:07:06.293 Accel Perf Configuration: 00:07:06.293 Workload Type: fill 00:07:06.293 Fill pattern: 0x80 00:07:06.293 Transfer size: 4096 bytes 00:07:06.293 Vector count 1 00:07:06.293 Module: software 00:07:06.293 Queue depth: 64 00:07:06.293 Allocate depth: 64 00:07:06.293 # threads/core: 1 00:07:06.293 Run time: 1 seconds 00:07:06.293 Verify: Yes 00:07:06.293 00:07:06.293 Running for 1 seconds... 
00:07:06.293 00:07:06.293 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.293 ------------------------------------------------------------------------------------ 00:07:06.293 0,0 574784/s 2245 MiB/s 0 0 00:07:06.293 ==================================================================================== 00:07:06.293 Total 574784/s 2245 MiB/s 0 0' 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.293 13:21:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.293 13:21:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.293 13:21:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.293 13:21:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.293 13:21:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.293 13:21:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.293 13:21:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.293 13:21:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.293 13:21:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.293 13:21:11 -- accel/accel.sh@42 -- # jq -r . 00:07:06.293 [2024-12-15 13:21:11.740244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:06.293 [2024-12-15 13:21:11.740345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70619 ] 00:07:06.293 [2024-12-15 13:21:11.877085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.293 [2024-12-15 13:21:11.925693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.293 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.293 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.293 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.293 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.293 13:21:11 -- accel/accel.sh@21 -- # val=0x1 00:07:06.293 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.293 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.293 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.293 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.293 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.293 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.294 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.294 13:21:11 -- accel/accel.sh@21 -- # val=fill 00:07:06.294 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.294 13:21:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=0x80 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 
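As a quick sanity check on the fill table above (574784 transfers/s at the 4096-byte transfer size reported in the configuration block), the MiB/s column follows directly from the transfer rate. A throwaway shell calculation, using only figures already printed in this log:

# 574784 completed fill operations per second, 4096 bytes each, converted to MiB/s
# (integer arithmetic, so the fractional part is dropped):
echo $(( 574784 * 4096 / 1024 / 1024 ))   # prints 2245, matching the table above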
00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=software 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=64 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=64 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=1 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val=Yes 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.553 13:21:11 -- accel/accel.sh@21 -- # val= 00:07:06.553 13:21:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.553 13:21:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 
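The fill configuration replayed above reports a pattern of 0x80, which is simply the decimal 128 passed via -f in the run_test accel_fill command line earlier in this log. A one-liner confirming the conversion, included only as an illustration:

# accel_test passes -f 128; accel_perf reports it as fill pattern 0x80.
printf 'fill pattern: 0x%02x\n' 128   # prints: fill pattern: 0x80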
00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@21 -- # val= 00:07:07.516 13:21:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.516 13:21:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.516 13:21:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.516 13:21:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:07.516 13:21:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.516 00:07:07.516 real 0m2.787s 00:07:07.516 user 0m2.384s 00:07:07.516 sys 0m0.204s 00:07:07.516 13:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.516 13:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.516 ************************************ 00:07:07.516 END TEST accel_fill 00:07:07.516 ************************************ 00:07:07.516 13:21:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:07.516 13:21:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:07.516 13:21:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.516 13:21:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.516 ************************************ 00:07:07.516 START TEST accel_copy_crc32c 00:07:07.516 ************************************ 00:07:07.516 13:21:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:07.516 13:21:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.516 13:21:13 -- accel/accel.sh@17 -- # local accel_module 00:07:07.516 13:21:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.516 13:21:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.516 13:21:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.516 13:21:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.516 13:21:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.516 13:21:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.516 13:21:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.516 13:21:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.516 13:21:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.516 13:21:13 -- accel/accel.sh@42 -- # jq -r . 00:07:07.516 [2024-12-15 13:21:13.179309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.516 [2024-12-15 13:21:13.179401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70653 ] 00:07:07.775 [2024-12-15 13:21:13.316961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.775 [2024-12-15 13:21:13.372640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.150 13:21:14 -- accel/accel.sh@18 -- # out=' 00:07:09.150 SPDK Configuration: 00:07:09.150 Core mask: 0x1 00:07:09.150 00:07:09.150 Accel Perf Configuration: 00:07:09.150 Workload Type: copy_crc32c 00:07:09.150 CRC-32C seed: 0 00:07:09.150 Vector size: 4096 bytes 00:07:09.150 Transfer size: 4096 bytes 00:07:09.150 Vector count 1 00:07:09.150 Module: software 00:07:09.150 Queue depth: 32 00:07:09.150 Allocate depth: 32 00:07:09.150 # threads/core: 1 00:07:09.150 Run time: 1 seconds 00:07:09.150 Verify: Yes 00:07:09.150 00:07:09.150 Running for 1 seconds... 
00:07:09.150 00:07:09.150 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.150 ------------------------------------------------------------------------------------ 00:07:09.150 0,0 308096/s 1203 MiB/s 0 0 00:07:09.150 ==================================================================================== 00:07:09.150 Total 308096/s 1203 MiB/s 0 0' 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.150 13:21:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.150 13:21:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.150 13:21:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.150 13:21:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.150 13:21:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.150 13:21:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.150 13:21:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.150 13:21:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.150 13:21:14 -- accel/accel.sh@42 -- # jq -r . 00:07:09.150 [2024-12-15 13:21:14.574181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.150 [2024-12-15 13:21:14.574275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:07:09.150 [2024-12-15 13:21:14.712162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.150 [2024-12-15 13:21:14.759182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val=0x1 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.150 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.150 13:21:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:09.150 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.150 13:21:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=0 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 
13:21:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=software 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=32 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=32 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=1 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val=Yes 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.151 13:21:14 -- accel/accel.sh@21 -- # val= 00:07:09.151 13:21:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.151 13:21:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 
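Comparing the single-core software rates reported so far (529632/s for crc32c, 392000/s for copy, 308096/s for copy_crc32c), the fused copy_crc32c path runs slower than either standalone operation, which is plausible since each operation both copies and checksums the buffer. A small awk check of the ratios, using only numbers taken from the tables in this log:

# Rough ratios of the per-core rates printed earlier (software module, 4096-byte transfers):
awk 'BEGIN { printf "copy/copy_crc32c = %.2f  crc32c/copy_crc32c = %.2f\n", 392000/308096, 529632/308096 }'
# prints: copy/copy_crc32c = 1.27  crc32c/copy_crc32c = 1.72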
00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@21 -- # val= 00:07:10.527 13:21:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.527 13:21:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.527 13:21:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.527 13:21:15 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:10.527 13:21:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.527 00:07:10.527 real 0m2.802s 00:07:10.527 user 0m2.401s 00:07:10.527 sys 0m0.203s 00:07:10.527 13:21:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.527 ************************************ 00:07:10.527 END TEST accel_copy_crc32c 00:07:10.527 ************************************ 00:07:10.527 13:21:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.527 13:21:15 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.527 13:21:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:10.527 13:21:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.527 13:21:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.527 ************************************ 00:07:10.527 START TEST accel_copy_crc32c_C2 00:07:10.527 ************************************ 00:07:10.527 13:21:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.527 13:21:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.527 13:21:16 -- accel/accel.sh@17 -- # local accel_module 00:07:10.527 13:21:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.527 13:21:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.527 13:21:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.527 13:21:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.527 13:21:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.527 13:21:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.527 13:21:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.527 13:21:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.527 13:21:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.527 13:21:16 -- accel/accel.sh@42 -- # jq -r . 00:07:10.527 [2024-12-15 13:21:16.029142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:10.527 [2024-12-15 13:21:16.029703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70709 ] 00:07:10.527 [2024-12-15 13:21:16.163151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.527 [2024-12-15 13:21:16.209806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.908 13:21:17 -- accel/accel.sh@18 -- # out=' 00:07:11.908 SPDK Configuration: 00:07:11.908 Core mask: 0x1 00:07:11.908 00:07:11.908 Accel Perf Configuration: 00:07:11.908 Workload Type: copy_crc32c 00:07:11.908 CRC-32C seed: 0 00:07:11.908 Vector size: 4096 bytes 00:07:11.908 Transfer size: 8192 bytes 00:07:11.908 Vector count 2 00:07:11.908 Module: software 00:07:11.908 Queue depth: 32 00:07:11.908 Allocate depth: 32 00:07:11.908 # threads/core: 1 00:07:11.908 Run time: 1 seconds 00:07:11.908 Verify: Yes 00:07:11.908 00:07:11.908 Running for 1 seconds... 00:07:11.908 00:07:11.908 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.908 ------------------------------------------------------------------------------------ 00:07:11.908 0,0 221760/s 1732 MiB/s 0 0 00:07:11.908 ==================================================================================== 00:07:11.908 Total 221760/s 866 MiB/s 0 0' 00:07:11.908 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.908 13:21:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:11.908 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.908 13:21:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:11.908 13:21:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.908 13:21:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.908 13:21:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.908 13:21:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.908 13:21:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.908 13:21:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.908 13:21:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.908 13:21:17 -- accel/accel.sh@42 -- # jq -r . 00:07:11.908 [2024-12-15 13:21:17.412578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:11.908 [2024-12-15 13:21:17.412692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70723 ] 00:07:11.908 [2024-12-15 13:21:17.548322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.908 [2024-12-15 13:21:17.594278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=0x1 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=0 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=software 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=32 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=32 
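In the -C 2 run above, the configuration block reports a 4096-byte vector size but an 8192-byte transfer size (two chained buffers per operation), and the two bandwidth figures in its table correspond to those two sizes applied to the same 221760 ops/s rate. The arithmetic below is only an observation about the printed numbers, not a statement about how accel_perf computes its columns:

# 221760 copy_crc32c operations per second, read against the two sizes shown in the
# SPDK configuration block (8192-byte transfer vs. 4096-byte vectors):
echo $(( 221760 * 8192 / 1024 / 1024 ))   # 1732 MiB/s, the per-core row
echo $(( 221760 * 4096 / 1024 / 1024 ))   # 866 MiB/s, the Total row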
00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=1 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val=Yes 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.166 13:21:17 -- accel/accel.sh@21 -- # val= 00:07:12.166 13:21:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # IFS=: 00:07:12.166 13:21:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@21 -- # val= 00:07:13.103 13:21:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # IFS=: 00:07:13.103 13:21:18 -- accel/accel.sh@20 -- # read -r var val 00:07:13.103 13:21:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.103 13:21:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:13.103 13:21:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.103 00:07:13.103 real 0m2.777s 00:07:13.103 user 0m2.364s 00:07:13.103 sys 0m0.213s 00:07:13.103 13:21:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.103 13:21:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.103 ************************************ 00:07:13.103 END TEST accel_copy_crc32c_C2 00:07:13.103 ************************************ 00:07:13.362 13:21:18 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.362 13:21:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:13.362 13:21:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.362 13:21:18 -- common/autotest_common.sh@10 -- # set +x 00:07:13.362 ************************************ 00:07:13.362 START TEST accel_dualcast 00:07:13.362 ************************************ 00:07:13.362 13:21:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:13.362 13:21:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.362 13:21:18 -- accel/accel.sh@17 -- # local accel_module 00:07:13.362 13:21:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:13.362 13:21:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.362 13:21:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.362 13:21:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.362 13:21:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.362 13:21:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.362 13:21:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.362 13:21:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.362 13:21:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.362 13:21:18 -- accel/accel.sh@42 -- # jq -r . 00:07:13.362 [2024-12-15 13:21:18.858418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:13.362 [2024-12-15 13:21:18.858508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70758 ] 00:07:13.362 [2024-12-15 13:21:18.996737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.362 [2024-12-15 13:21:19.043100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.737 13:21:20 -- accel/accel.sh@18 -- # out=' 00:07:14.737 SPDK Configuration: 00:07:14.737 Core mask: 0x1 00:07:14.737 00:07:14.737 Accel Perf Configuration: 00:07:14.737 Workload Type: dualcast 00:07:14.737 Transfer size: 4096 bytes 00:07:14.737 Vector count 1 00:07:14.737 Module: software 00:07:14.737 Queue depth: 32 00:07:14.737 Allocate depth: 32 00:07:14.737 # threads/core: 1 00:07:14.737 Run time: 1 seconds 00:07:14.737 Verify: Yes 00:07:14.737 00:07:14.737 Running for 1 seconds... 00:07:14.737 00:07:14.737 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.737 ------------------------------------------------------------------------------------ 00:07:14.737 0,0 424992/s 1660 MiB/s 0 0 00:07:14.737 ==================================================================================== 00:07:14.737 Total 424992/s 1660 MiB/s 0 0' 00:07:14.737 13:21:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:14.737 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.737 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.737 13:21:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:14.737 13:21:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.737 13:21:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.737 13:21:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.737 13:21:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.737 13:21:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.737 13:21:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.737 13:21:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.737 13:21:20 -- accel/accel.sh@42 -- # jq -r . 
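Aside (sketch, not captured output): run_test at accel.sh@99 invokes the helper as accel_test -t 1 -w dualcast -y, which expands to the accel_perf command traced at accel.sh@12. Under the same assumption of a built SPDK tree, the dualcast case alone is roughly:

  # sketch: 1-second verified dualcast run; the 4096-byte transfer and queue depth 32 above are the run's reported defaults
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y

The reported 424992 transfers/s at 4096 bytes each comes to 424992 * 4096 / 1048576 ≈ 1660 MiB/s, matching the bandwidth column.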
00:07:14.737 [2024-12-15 13:21:20.252159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.737 [2024-12-15 13:21:20.252251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70777 ] 00:07:14.737 [2024-12-15 13:21:20.385615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.997 [2024-12-15 13:21:20.432507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=0x1 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=dualcast 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=software 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=32 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=32 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=1 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 
13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val=Yes 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.997 13:21:20 -- accel/accel.sh@21 -- # val= 00:07:14.997 13:21:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # IFS=: 00:07:14.997 13:21:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@21 -- # val= 00:07:16.374 13:21:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # IFS=: 00:07:16.374 13:21:21 -- accel/accel.sh@20 -- # read -r var val 00:07:16.374 13:21:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.374 13:21:21 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:16.374 13:21:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.374 ************************************ 00:07:16.374 END TEST accel_dualcast 00:07:16.374 ************************************ 00:07:16.374 00:07:16.374 real 0m2.799s 00:07:16.374 user 0m2.385s 00:07:16.374 sys 0m0.212s 00:07:16.374 13:21:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.374 13:21:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.374 13:21:21 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:16.374 13:21:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.374 13:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.374 13:21:21 -- common/autotest_common.sh@10 -- # set +x 00:07:16.374 ************************************ 00:07:16.374 START TEST accel_compare 00:07:16.374 ************************************ 00:07:16.374 13:21:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:16.374 
13:21:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.374 13:21:21 -- accel/accel.sh@17 -- # local accel_module 00:07:16.374 13:21:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:16.374 13:21:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.374 13:21:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.374 13:21:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.374 13:21:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.374 13:21:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.374 13:21:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.374 13:21:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.374 13:21:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.374 13:21:21 -- accel/accel.sh@42 -- # jq -r . 00:07:16.374 [2024-12-15 13:21:21.707077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:16.374 [2024-12-15 13:21:21.707162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70806 ] 00:07:16.374 [2024-12-15 13:21:21.837229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.374 [2024-12-15 13:21:21.883868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.752 13:21:23 -- accel/accel.sh@18 -- # out=' 00:07:17.752 SPDK Configuration: 00:07:17.752 Core mask: 0x1 00:07:17.752 00:07:17.752 Accel Perf Configuration: 00:07:17.752 Workload Type: compare 00:07:17.752 Transfer size: 4096 bytes 00:07:17.752 Vector count 1 00:07:17.752 Module: software 00:07:17.752 Queue depth: 32 00:07:17.752 Allocate depth: 32 00:07:17.752 # threads/core: 1 00:07:17.752 Run time: 1 seconds 00:07:17.752 Verify: Yes 00:07:17.752 00:07:17.752 Running for 1 seconds... 00:07:17.752 00:07:17.752 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.752 ------------------------------------------------------------------------------------ 00:07:17.752 0,0 565856/s 2210 MiB/s 0 0 00:07:17.752 ==================================================================================== 00:07:17.752 Total 565856/s 2210 MiB/s 0 0' 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:17.752 13:21:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.752 13:21:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.752 13:21:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.752 13:21:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.752 13:21:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.752 13:21:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.752 13:21:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.752 13:21:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.752 13:21:23 -- accel/accel.sh@42 -- # jq -r . 00:07:17.752 [2024-12-15 13:21:23.087232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
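Aside: the compare run above reports 565856 transfers/s of 4096 bytes. A quick consistency check of the bandwidth column (shell arithmetic only, nothing taken from the harness):

  # 565856 transfers/s * 4096 B, expressed in MiB/s
  echo $(( 565856 * 4096 / 1048576 ))   # prints 2210, matching the 2210 MiB/s shown

The same relation (transfers/s * 4096 / 1048576) reproduces the Total row for the other software-module runs in this section as well.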
00:07:17.752 [2024-12-15 13:21:23.087484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70826 ] 00:07:17.752 [2024-12-15 13:21:23.220426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.752 [2024-12-15 13:21:23.266741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=0x1 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=compare 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=software 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=32 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=32 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=1 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val=Yes 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.752 13:21:23 -- accel/accel.sh@21 -- # val= 00:07:17.752 13:21:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.752 13:21:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@21 -- # val= 00:07:19.128 13:21:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # IFS=: 00:07:19.128 13:21:24 -- accel/accel.sh@20 -- # read -r var val 00:07:19.128 13:21:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.128 13:21:24 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:19.128 13:21:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.128 00:07:19.128 real 0m2.773s 00:07:19.128 user 0m2.367s 00:07:19.128 sys 0m0.205s 00:07:19.128 13:21:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.128 13:21:24 -- common/autotest_common.sh@10 -- # set +x 00:07:19.128 ************************************ 00:07:19.128 END TEST accel_compare 00:07:19.128 ************************************ 00:07:19.128 13:21:24 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.128 13:21:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:19.128 13:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.128 13:21:24 -- common/autotest_common.sh@10 -- # set +x 00:07:19.128 ************************************ 00:07:19.128 START TEST accel_xor 00:07:19.128 ************************************ 00:07:19.128 13:21:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:19.128 13:21:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.128 13:21:24 -- accel/accel.sh@17 -- # local accel_module 00:07:19.128 
13:21:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:19.128 13:21:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.128 13:21:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.128 13:21:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.128 13:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.128 13:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.128 13:21:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.128 13:21:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.128 13:21:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.129 13:21:24 -- accel/accel.sh@42 -- # jq -r . 00:07:19.129 [2024-12-15 13:21:24.533290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.129 [2024-12-15 13:21:24.533549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70860 ] 00:07:19.129 [2024-12-15 13:21:24.668488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.129 [2024-12-15 13:21:24.715324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.534 13:21:25 -- accel/accel.sh@18 -- # out=' 00:07:20.534 SPDK Configuration: 00:07:20.534 Core mask: 0x1 00:07:20.534 00:07:20.534 Accel Perf Configuration: 00:07:20.534 Workload Type: xor 00:07:20.534 Source buffers: 2 00:07:20.534 Transfer size: 4096 bytes 00:07:20.534 Vector count 1 00:07:20.534 Module: software 00:07:20.534 Queue depth: 32 00:07:20.534 Allocate depth: 32 00:07:20.534 # threads/core: 1 00:07:20.534 Run time: 1 seconds 00:07:20.534 Verify: Yes 00:07:20.534 00:07:20.534 Running for 1 seconds... 00:07:20.534 00:07:20.534 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.534 ------------------------------------------------------------------------------------ 00:07:20.534 0,0 298944/s 1167 MiB/s 0 0 00:07:20.534 ==================================================================================== 00:07:20.534 Total 298944/s 1167 MiB/s 0 0' 00:07:20.534 13:21:25 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:20.534 13:21:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:20.534 13:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.534 13:21:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.534 13:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.534 13:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.534 13:21:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.534 13:21:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.534 13:21:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.534 13:21:25 -- accel/accel.sh@42 -- # jq -r . 00:07:20.534 [2024-12-15 13:21:25.931845] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
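Aside (sketch): the xor case is started as accel_test -t 1 -w xor -y (accel.sh@101); with no -x flag the run uses two source buffers, as the 'Source buffers: 2' line above shows. A standalone call under the same build-tree assumption:

  # sketch: 1-second verified xor run over 2 source buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y

298944 transfers/s * 4096 / 1048576 ≈ 1167 MiB/s, consistent with the result row.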
00:07:20.534 [2024-12-15 13:21:25.931913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70880 ] 00:07:20.534 [2024-12-15 13:21:26.058965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.534 [2024-12-15 13:21:26.106460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val=0x1 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val=xor 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val=2 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.534 13:21:26 -- accel/accel.sh@21 -- # val=software 00:07:20.534 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.534 13:21:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.534 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val=32 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val=32 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val=1 00:07:20.535 13:21:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val=Yes 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.535 13:21:26 -- accel/accel.sh@21 -- # val= 00:07:20.535 13:21:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.535 13:21:26 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@21 -- # val= 00:07:21.911 13:21:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # IFS=: 00:07:21.911 13:21:27 -- accel/accel.sh@20 -- # read -r var val 00:07:21.911 13:21:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.911 13:21:27 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:21.911 13:21:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.911 00:07:21.911 real 0m2.782s 00:07:21.911 user 0m2.384s 00:07:21.911 sys 0m0.199s 00:07:21.911 13:21:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.911 ************************************ 00:07:21.911 END TEST accel_xor 00:07:21.911 ************************************ 00:07:21.911 13:21:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.911 13:21:27 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:21.911 13:21:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:21.911 13:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.911 13:21:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.911 ************************************ 00:07:21.911 START TEST accel_xor 00:07:21.911 ************************************ 00:07:21.911 
13:21:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:21.911 13:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.911 13:21:27 -- accel/accel.sh@17 -- # local accel_module 00:07:21.911 13:21:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:21.911 13:21:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:21.911 13:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.911 13:21:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.911 13:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.911 13:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.911 13:21:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.911 13:21:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.911 13:21:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.911 13:21:27 -- accel/accel.sh@42 -- # jq -r . 00:07:21.911 [2024-12-15 13:21:27.366027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.911 [2024-12-15 13:21:27.366119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70909 ] 00:07:21.911 [2024-12-15 13:21:27.502368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.911 [2024-12-15 13:21:27.561864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.290 13:21:28 -- accel/accel.sh@18 -- # out=' 00:07:23.290 SPDK Configuration: 00:07:23.290 Core mask: 0x1 00:07:23.290 00:07:23.290 Accel Perf Configuration: 00:07:23.290 Workload Type: xor 00:07:23.290 Source buffers: 3 00:07:23.290 Transfer size: 4096 bytes 00:07:23.290 Vector count 1 00:07:23.290 Module: software 00:07:23.290 Queue depth: 32 00:07:23.290 Allocate depth: 32 00:07:23.290 # threads/core: 1 00:07:23.290 Run time: 1 seconds 00:07:23.290 Verify: Yes 00:07:23.290 00:07:23.290 Running for 1 seconds... 00:07:23.290 00:07:23.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.290 ------------------------------------------------------------------------------------ 00:07:23.290 0,0 283648/s 1108 MiB/s 0 0 00:07:23.290 ==================================================================================== 00:07:23.290 Total 283648/s 1108 MiB/s 0 0' 00:07:23.290 13:21:28 -- accel/accel.sh@20 -- # IFS=: 00:07:23.290 13:21:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.290 13:21:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.290 13:21:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.290 13:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.290 13:21:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.290 13:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.290 13:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.290 13:21:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.290 13:21:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.290 13:21:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.290 13:21:28 -- accel/accel.sh@42 -- # jq -r . 00:07:23.290 [2024-12-15 13:21:28.783167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
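Aside (sketch): this second xor test adds -x 3 (accel.sh@102: run_test accel_xor accel_test -t 1 -w xor -y -x 3), reflected in the 'Source buffers: 3' line above. The equivalent direct call, under the same assumptions as before:

  # sketch: xor across 3 source buffers instead of the 2 used in the previous test
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

283648 transfers/s * 4096 / 1048576 ≈ 1108 MiB/s, matching the result row and sitting slightly below the 2-buffer case, as one would expect with an extra source buffer per operation.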
00:07:23.290 [2024-12-15 13:21:28.783261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:07:23.290 [2024-12-15 13:21:28.918871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.290 [2024-12-15 13:21:28.966207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=0x1 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=xor 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=3 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=software 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=32 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=32 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=1 00:07:23.549 13:21:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val=Yes 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 13:21:29 -- accel/accel.sh@21 -- # val= 00:07:23.549 13:21:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 13:21:29 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@21 -- # val= 00:07:24.486 ************************************ 00:07:24.486 END TEST accel_xor 00:07:24.486 ************************************ 00:07:24.486 13:21:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # IFS=: 00:07:24.486 13:21:30 -- accel/accel.sh@20 -- # read -r var val 00:07:24.486 13:21:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.486 13:21:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:24.486 13:21:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.486 00:07:24.486 real 0m2.811s 00:07:24.486 user 0m2.404s 00:07:24.486 sys 0m0.210s 00:07:24.486 13:21:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.486 13:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:24.745 13:21:30 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:24.745 13:21:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:24.745 13:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.745 13:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:24.745 ************************************ 00:07:24.745 START TEST accel_dif_verify 00:07:24.745 ************************************ 
00:07:24.745 13:21:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:24.745 13:21:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.745 13:21:30 -- accel/accel.sh@17 -- # local accel_module 00:07:24.745 13:21:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:24.745 13:21:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:24.745 13:21:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.745 13:21:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.745 13:21:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.745 13:21:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.745 13:21:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.745 13:21:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.745 13:21:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.745 13:21:30 -- accel/accel.sh@42 -- # jq -r . 00:07:24.745 [2024-12-15 13:21:30.225312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.745 [2024-12-15 13:21:30.225396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70963 ] 00:07:24.745 [2024-12-15 13:21:30.351829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.745 [2024-12-15 13:21:30.398565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.122 13:21:31 -- accel/accel.sh@18 -- # out=' 00:07:26.122 SPDK Configuration: 00:07:26.122 Core mask: 0x1 00:07:26.122 00:07:26.122 Accel Perf Configuration: 00:07:26.122 Workload Type: dif_verify 00:07:26.122 Vector size: 4096 bytes 00:07:26.122 Transfer size: 4096 bytes 00:07:26.122 Block size: 512 bytes 00:07:26.122 Metadata size: 8 bytes 00:07:26.122 Vector count 1 00:07:26.122 Module: software 00:07:26.122 Queue depth: 32 00:07:26.122 Allocate depth: 32 00:07:26.122 # threads/core: 1 00:07:26.122 Run time: 1 seconds 00:07:26.122 Verify: No 00:07:26.122 00:07:26.122 Running for 1 seconds... 00:07:26.122 00:07:26.122 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.122 ------------------------------------------------------------------------------------ 00:07:26.122 0,0 125696/s 498 MiB/s 0 0 00:07:26.122 ==================================================================================== 00:07:26.122 Total 125696/s 491 MiB/s 0 0' 00:07:26.122 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.122 13:21:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.122 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.122 13:21:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.122 13:21:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.122 13:21:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.122 13:21:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.122 13:21:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.122 13:21:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.122 13:21:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.122 13:21:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.122 13:21:31 -- accel/accel.sh@42 -- # jq -r . 00:07:26.122 [2024-12-15 13:21:31.604303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
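Aside (sketch): the dif_verify case (accel.sh@103: run_test accel_dif_verify accel_test -t 1 -w dif_verify) is launched without -y, so the configuration reports 'Verify: No'; presumably the DIF check is the workload itself here, using the 512-byte block and 8-byte metadata sizes shown above. A direct-call sketch under the same assumptions:

  # sketch: 1-second dif_verify run, defaults otherwise
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

125696 transfers/s * 4096 / 1048576 ≈ 491 MiB/s, matching the Total row.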
00:07:26.122 [2024-12-15 13:21:31.604399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70977 ] 00:07:26.122 [2024-12-15 13:21:31.738357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.122 [2024-12-15 13:21:31.790161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=0x1 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=dif_verify 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=software 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 
-- # val=32 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=32 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=1 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val=No 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.381 13:21:31 -- accel/accel.sh@21 -- # val= 00:07:26.381 13:21:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.381 13:21:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@21 -- # val= 00:07:27.318 13:21:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # IFS=: 00:07:27.318 13:21:32 -- accel/accel.sh@20 -- # read -r var val 00:07:27.318 13:21:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.318 13:21:32 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:27.318 13:21:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.318 00:07:27.318 real 0m2.793s 00:07:27.318 user 0m2.394s 00:07:27.318 sys 0m0.203s 00:07:27.318 13:21:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.318 13:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.318 ************************************ 00:07:27.318 END TEST 
accel_dif_verify 00:07:27.318 ************************************ 00:07:27.577 13:21:33 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:27.577 13:21:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:27.577 13:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.577 13:21:33 -- common/autotest_common.sh@10 -- # set +x 00:07:27.577 ************************************ 00:07:27.577 START TEST accel_dif_generate 00:07:27.577 ************************************ 00:07:27.577 13:21:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:27.577 13:21:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.577 13:21:33 -- accel/accel.sh@17 -- # local accel_module 00:07:27.577 13:21:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:27.577 13:21:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:27.577 13:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.577 13:21:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.577 13:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.577 13:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.577 13:21:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.577 13:21:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.577 13:21:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.577 13:21:33 -- accel/accel.sh@42 -- # jq -r . 00:07:27.577 [2024-12-15 13:21:33.072761] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.577 [2024-12-15 13:21:33.072908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:07:27.577 [2024-12-15 13:21:33.206029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.577 [2024-12-15 13:21:33.256988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.953 13:21:34 -- accel/accel.sh@18 -- # out=' 00:07:28.953 SPDK Configuration: 00:07:28.953 Core mask: 0x1 00:07:28.953 00:07:28.953 Accel Perf Configuration: 00:07:28.953 Workload Type: dif_generate 00:07:28.953 Vector size: 4096 bytes 00:07:28.953 Transfer size: 4096 bytes 00:07:28.953 Block size: 512 bytes 00:07:28.953 Metadata size: 8 bytes 00:07:28.953 Vector count 1 00:07:28.953 Module: software 00:07:28.953 Queue depth: 32 00:07:28.953 Allocate depth: 32 00:07:28.953 # threads/core: 1 00:07:28.953 Run time: 1 seconds 00:07:28.953 Verify: No 00:07:28.953 00:07:28.953 Running for 1 seconds... 
00:07:28.953 00:07:28.953 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.953 ------------------------------------------------------------------------------------ 00:07:28.953 0,0 152672/s 596 MiB/s 0 0 00:07:28.953 ==================================================================================== 00:07:28.953 Total 152672/s 596 MiB/s 0 0' 00:07:28.953 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:28.953 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:28.953 13:21:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:28.953 13:21:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.953 13:21:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.953 13:21:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.953 13:21:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.953 13:21:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.953 13:21:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.953 13:21:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.953 13:21:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.953 13:21:34 -- accel/accel.sh@42 -- # jq -r . 00:07:28.953 [2024-12-15 13:21:34.478656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.953 [2024-12-15 13:21:34.478754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:07:28.953 [2024-12-15 13:21:34.614630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.213 [2024-12-15 13:21:34.666283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=0x1 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=dif_generate 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val
00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=software 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=32 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=32 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=1 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val=No 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.213 13:21:34 -- accel/accel.sh@21 -- # val= 00:07:29.213 13:21:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.213 13:21:34 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- 
accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@21 -- # val= 00:07:30.589 13:21:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.589 13:21:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.589 13:21:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.589 13:21:35 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:30.589 13:21:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.589 00:07:30.589 real 0m2.802s 00:07:30.589 user 0m2.388s 00:07:30.589 sys 0m0.217s 00:07:30.589 13:21:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:30.589 ************************************ 00:07:30.589 END TEST accel_dif_generate 00:07:30.589 ************************************ 00:07:30.589 13:21:35 -- common/autotest_common.sh@10 -- # set +x 00:07:30.589 13:21:35 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:30.589 13:21:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:30.589 13:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.589 13:21:35 -- common/autotest_common.sh@10 -- # set +x 00:07:30.589 ************************************ 00:07:30.589 START TEST accel_dif_generate_copy 00:07:30.589 ************************************ 00:07:30.589 13:21:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:30.589 13:21:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.589 13:21:35 -- accel/accel.sh@17 -- # local accel_module 00:07:30.589 13:21:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.589 13:21:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.589 13:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.589 13:21:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.589 13:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.590 13:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.590 13:21:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.590 13:21:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.590 13:21:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.590 13:21:35 -- accel/accel.sh@42 -- # jq -r . 00:07:30.590 [2024-12-15 13:21:35.923421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:30.590 [2024-12-15 13:21:35.923693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71060 ] 00:07:30.590 [2024-12-15 13:21:36.056158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.590 [2024-12-15 13:21:36.103710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.979 13:21:37 -- accel/accel.sh@18 -- # out=' 00:07:31.979 SPDK Configuration: 00:07:31.979 Core mask: 0x1 00:07:31.979 00:07:31.979 Accel Perf Configuration: 00:07:31.979 Workload Type: dif_generate_copy 00:07:31.979 Vector size: 4096 bytes 00:07:31.979 Transfer size: 4096 bytes 00:07:31.979 Vector count 1 00:07:31.979 Module: software 00:07:31.979 Queue depth: 32 00:07:31.979 Allocate depth: 32 00:07:31.979 # threads/core: 1 00:07:31.979 Run time: 1 seconds 00:07:31.979 Verify: No 00:07:31.979 00:07:31.979 Running for 1 seconds... 00:07:31.979 00:07:31.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.979 ------------------------------------------------------------------------------------ 00:07:31.979 0,0 117440/s 458 MiB/s 0 0 00:07:31.979 ==================================================================================== 00:07:31.979 Total 117440/s 458 MiB/s 0 0' 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:31.979 13:21:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.979 13:21:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.979 13:21:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.979 13:21:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.979 13:21:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.979 13:21:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.979 13:21:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.979 13:21:37 -- accel/accel.sh@42 -- # jq -r . 00:07:31.979 [2024-12-15 13:21:37.314548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:31.979 [2024-12-15 13:21:37.314842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71085 ] 00:07:31.979 [2024-12-15 13:21:37.443056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.979 [2024-12-15 13:21:37.489687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=0x1 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=software 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=32 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=32 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 
-- # val=1 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val=No 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.979 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.979 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.979 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.980 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.980 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.980 13:21:37 -- accel/accel.sh@21 -- # val= 00:07:31.980 13:21:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.980 13:21:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.980 13:21:37 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@21 -- # val= 00:07:33.366 13:21:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.366 13:21:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.366 13:21:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.366 13:21:38 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:33.366 13:21:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.366 00:07:33.366 real 0m2.775s 00:07:33.366 user 0m2.374s 00:07:33.366 sys 0m0.201s 00:07:33.366 13:21:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.366 13:21:38 -- common/autotest_common.sh@10 -- # set +x 00:07:33.366 ************************************ 00:07:33.366 END TEST accel_dif_generate_copy 00:07:33.366 ************************************ 00:07:33.366 13:21:38 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:33.366 13:21:38 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.366 13:21:38 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:33.366 13:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.366 13:21:38 -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.366 ************************************ 00:07:33.366 START TEST accel_comp 00:07:33.366 ************************************ 00:07:33.366 13:21:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.366 13:21:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.366 13:21:38 -- accel/accel.sh@17 -- # local accel_module 00:07:33.366 13:21:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.366 13:21:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.366 13:21:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.366 13:21:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.366 13:21:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.366 13:21:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.366 13:21:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.366 13:21:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.366 13:21:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.366 13:21:38 -- accel/accel.sh@42 -- # jq -r . 00:07:33.366 [2024-12-15 13:21:38.751470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.366 [2024-12-15 13:21:38.751723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71114 ] 00:07:33.366 [2024-12-15 13:21:38.881895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.366 [2024-12-15 13:21:38.928161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.743 13:21:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.743 00:07:34.743 SPDK Configuration: 00:07:34.743 Core mask: 0x1 00:07:34.743 00:07:34.743 Accel Perf Configuration: 00:07:34.743 Workload Type: compress 00:07:34.743 Transfer size: 4096 bytes 00:07:34.743 Vector count 1 00:07:34.743 Module: software 00:07:34.743 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.743 Queue depth: 32 00:07:34.743 Allocate depth: 32 00:07:34.743 # threads/core: 1 00:07:34.743 Run time: 1 seconds 00:07:34.743 Verify: No 00:07:34.743 00:07:34.743 Running for 1 seconds... 
00:07:34.743 00:07:34.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.743 ------------------------------------------------------------------------------------ 00:07:34.743 0,0 59680/s 233 MiB/s 0 0 00:07:34.743 ==================================================================================== 00:07:34.743 Total 59680/s 233 MiB/s 0 0' 00:07:34.743 13:21:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.743 13:21:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.743 13:21:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.743 13:21:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.743 13:21:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.743 13:21:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.743 13:21:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.743 13:21:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.743 13:21:40 -- accel/accel.sh@42 -- # jq -r . 00:07:34.743 [2024-12-15 13:21:40.139016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.743 [2024-12-15 13:21:40.139320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:07:34.743 [2024-12-15 13:21:40.275200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.743 [2024-12-15 13:21:40.323166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val=0x1 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.743 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.743 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.743 13:21:40 -- accel/accel.sh@21 -- # val=compress 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=:
00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=software 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=32 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=32 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=1 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val=No 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.744 13:21:40 -- accel/accel.sh@21 -- # val= 00:07:34.744 13:21:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # IFS=: 00:07:34.744 13:21:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 
00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 ************************************ 00:07:36.121 END TEST accel_comp 00:07:36.121 ************************************ 00:07:36.121 13:21:41 -- accel/accel.sh@21 -- # val= 00:07:36.121 13:21:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # IFS=: 00:07:36.121 13:21:41 -- accel/accel.sh@20 -- # read -r var val 00:07:36.121 13:21:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.121 13:21:41 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:36.121 13:21:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.121 00:07:36.121 real 0m2.783s 00:07:36.121 user 0m2.377s 00:07:36.121 sys 0m0.202s 00:07:36.121 13:21:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.121 13:21:41 -- common/autotest_common.sh@10 -- # set +x 00:07:36.121 13:21:41 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.121 13:21:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:36.121 13:21:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.121 13:21:41 -- common/autotest_common.sh@10 -- # set +x 00:07:36.121 ************************************ 00:07:36.121 START TEST accel_decomp 00:07:36.121 ************************************ 00:07:36.121 13:21:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.121 13:21:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.121 13:21:41 -- accel/accel.sh@17 -- # local accel_module 00:07:36.121 13:21:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.121 13:21:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.121 13:21:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.121 13:21:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.121 13:21:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.121 13:21:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.121 13:21:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.121 13:21:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.121 13:21:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.121 13:21:41 -- accel/accel.sh@42 -- # jq -r . 00:07:36.121 [2024-12-15 13:21:41.589204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.121 [2024-12-15 13:21:41.589296] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71168 ] 00:07:36.121 [2024-12-15 13:21:41.721975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.121 [2024-12-15 13:21:41.768400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.499 13:21:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:37.499 00:07:37.499 SPDK Configuration: 00:07:37.499 Core mask: 0x1 00:07:37.499 00:07:37.499 Accel Perf Configuration: 00:07:37.499 Workload Type: decompress 00:07:37.499 Transfer size: 4096 bytes 00:07:37.499 Vector count 1 00:07:37.499 Module: software 00:07:37.499 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.499 Queue depth: 32 00:07:37.499 Allocate depth: 32 00:07:37.499 # threads/core: 1 00:07:37.499 Run time: 1 seconds 00:07:37.499 Verify: Yes 00:07:37.499 00:07:37.499 Running for 1 seconds... 00:07:37.499 00:07:37.499 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.499 ------------------------------------------------------------------------------------ 00:07:37.499 0,0 84768/s 331 MiB/s 0 0 00:07:37.499 ==================================================================================== 00:07:37.499 Total 84768/s 331 MiB/s 0 0' 00:07:37.499 13:21:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.499 13:21:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.499 13:21:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.499 13:21:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.499 13:21:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.499 13:21:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.499 13:21:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.499 13:21:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.499 13:21:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.499 13:21:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.499 13:21:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.499 13:21:42 -- accel/accel.sh@42 -- # jq -r . 00:07:37.499 [2024-12-15 13:21:42.974810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:37.499 [2024-12-15 13:21:42.974906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71182 ] 00:07:37.499 [2024-12-15 13:21:43.103989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.499 [2024-12-15 13:21:43.150112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=0x1 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=decompress 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=software 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=32 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- 
accel/accel.sh@21 -- # val=32 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=1 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val=Yes 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:37.758 13:21:43 -- accel/accel.sh@21 -- # val= 00:07:37.758 13:21:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # IFS=: 00:07:37.758 13:21:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@21 -- # val= 00:07:38.694 13:21:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.694 13:21:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.694 13:21:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.694 ************************************ 00:07:38.694 END TEST accel_decomp 00:07:38.694 ************************************ 00:07:38.694 13:21:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.694 13:21:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.694 00:07:38.694 real 0m2.799s 00:07:38.694 user 0m2.387s 00:07:38.694 sys 0m0.208s 00:07:38.694 13:21:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.694 13:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.953 13:21:44 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:38.953 13:21:44 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:38.953 13:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.953 13:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.953 ************************************ 00:07:38.953 START TEST accel_decmop_full 00:07:38.953 ************************************ 00:07:38.953 13:21:44 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.953 13:21:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.953 13:21:44 -- accel/accel.sh@17 -- # local accel_module 00:07:38.953 13:21:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.953 13:21:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.953 13:21:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.953 13:21:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.953 13:21:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.953 13:21:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.953 13:21:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.953 13:21:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.953 13:21:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.953 13:21:44 -- accel/accel.sh@42 -- # jq -r . 00:07:38.953 [2024-12-15 13:21:44.433220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.953 [2024-12-15 13:21:44.433312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71222 ] 00:07:38.953 [2024-12-15 13:21:44.568828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.953 [2024-12-15 13:21:44.622168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.328 13:21:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:40.328 00:07:40.328 SPDK Configuration: 00:07:40.328 Core mask: 0x1 00:07:40.328 00:07:40.328 Accel Perf Configuration: 00:07:40.328 Workload Type: decompress 00:07:40.328 Transfer size: 111250 bytes 00:07:40.328 Vector count 1 00:07:40.328 Module: software 00:07:40.328 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.328 Queue depth: 32 00:07:40.328 Allocate depth: 32 00:07:40.328 # threads/core: 1 00:07:40.328 Run time: 1 seconds 00:07:40.328 Verify: Yes 00:07:40.328 00:07:40.328 Running for 1 seconds... 
00:07:40.328 00:07:40.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.328 ------------------------------------------------------------------------------------ 00:07:40.328 0,0 5696/s 604 MiB/s 0 0 00:07:40.328 ==================================================================================== 00:07:40.328 Total 5696/s 604 MiB/s 0 0' 00:07:40.328 13:21:45 -- accel/accel.sh@20 -- # IFS=: 00:07:40.328 13:21:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.328 13:21:45 -- accel/accel.sh@20 -- # read -r var val 00:07:40.328 13:21:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:40.328 13:21:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.328 13:21:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.328 13:21:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.328 13:21:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.328 13:21:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.328 13:21:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.328 13:21:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.328 13:21:45 -- accel/accel.sh@42 -- # jq -r . 00:07:40.328 [2024-12-15 13:21:45.838706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.328 [2024-12-15 13:21:45.838808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71236 ] 00:07:40.328 [2024-12-15 13:21:45.974773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.636 [2024-12-15 13:21:46.021268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=0x1 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=decompress 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.636 13:21:46 -- accel/accel.sh@20
-- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=software 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=32 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=32 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=1 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val=Yes 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:40.636 13:21:46 -- accel/accel.sh@21 -- # val= 00:07:40.636 13:21:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # IFS=: 00:07:40.636 13:21:46 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # 
val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@21 -- # val= 00:07:41.597 13:21:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # IFS=: 00:07:41.597 13:21:47 -- accel/accel.sh@20 -- # read -r var val 00:07:41.597 13:21:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.597 13:21:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.597 13:21:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.597 00:07:41.597 real 0m2.807s 00:07:41.597 user 0m2.387s 00:07:41.597 sys 0m0.217s 00:07:41.597 13:21:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.597 13:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:41.597 ************************************ 00:07:41.597 END TEST accel_decmop_full 00:07:41.597 ************************************ 00:07:41.597 13:21:47 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.597 13:21:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:41.597 13:21:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.597 13:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:41.597 ************************************ 00:07:41.597 START TEST accel_decomp_mcore 00:07:41.597 ************************************ 00:07:41.597 13:21:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.597 13:21:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.597 13:21:47 -- accel/accel.sh@17 -- # local accel_module 00:07:41.597 13:21:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.597 13:21:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.597 13:21:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.597 13:21:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.597 13:21:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.597 13:21:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.597 13:21:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.597 13:21:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.597 13:21:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.597 13:21:47 -- accel/accel.sh@42 -- # jq -r . 00:07:41.923 [2024-12-15 13:21:47.295926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:41.923 [2024-12-15 13:21:47.296177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71271 ] 00:07:41.923 [2024-12-15 13:21:47.433674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.923 [2024-12-15 13:21:47.482646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.923 [2024-12-15 13:21:47.482792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.923 [2024-12-15 13:21:47.482898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.923 [2024-12-15 13:21:47.483216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.391 13:21:48 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:43.391 00:07:43.391 SPDK Configuration: 00:07:43.391 Core mask: 0xf 00:07:43.391 00:07:43.391 Accel Perf Configuration: 00:07:43.391 Workload Type: decompress 00:07:43.391 Transfer size: 4096 bytes 00:07:43.391 Vector count 1 00:07:43.391 Module: software 00:07:43.391 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.391 Queue depth: 32 00:07:43.391 Allocate depth: 32 00:07:43.391 # threads/core: 1 00:07:43.391 Run time: 1 seconds 00:07:43.391 Verify: Yes 00:07:43.391 00:07:43.391 Running for 1 seconds... 00:07:43.391 00:07:43.391 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.391 ------------------------------------------------------------------------------------ 00:07:43.391 0,0 67968/s 265 MiB/s 0 0 00:07:43.391 3,0 66368/s 259 MiB/s 0 0 00:07:43.391 2,0 65600/s 256 MiB/s 0 0 00:07:43.391 1,0 64800/s 253 MiB/s 0 0 00:07:43.391 ==================================================================================== 00:07:43.391 Total 264736/s 1034 MiB/s 0 0' 00:07:43.391 13:21:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:43.391 13:21:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.391 13:21:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.391 13:21:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.391 13:21:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.391 13:21:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.391 13:21:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.391 13:21:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.391 13:21:48 -- accel/accel.sh@42 -- # jq -r . 00:07:43.392 [2024-12-15 13:21:48.722571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:43.391 [2024-12-15 13:21:48.722694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:07:43.391 [2024-12-15 13:21:48.851184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.391 [2024-12-15 13:21:48.899896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.391 [2024-12-15 13:21:48.900000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.391 [2024-12-15 13:21:48.900097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.391 [2024-12-15 13:21:48.900099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val=0xf 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val=decompress 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val=software 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.391 13:21:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.391 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.391 13:21:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.391 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 
00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val=32 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val=32 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val=1 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val=Yes 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:43.392 13:21:48 -- accel/accel.sh@21 -- # val= 00:07:43.392 13:21:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # IFS=: 00:07:43.392 13:21:48 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- 
accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@21 -- # val= 00:07:44.767 13:21:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # IFS=: 00:07:44.767 13:21:50 -- accel/accel.sh@20 -- # read -r var val 00:07:44.767 13:21:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.767 13:21:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.767 13:21:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.767 00:07:44.767 real 0m2.837s 00:07:44.767 user 0m9.194s 00:07:44.767 sys 0m0.242s 00:07:44.767 13:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.767 13:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.767 ************************************ 00:07:44.767 END TEST accel_decomp_mcore 00:07:44.767 ************************************ 00:07:44.767 13:21:50 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.767 13:21:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:44.767 13:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.767 13:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.767 ************************************ 00:07:44.767 START TEST accel_decomp_full_mcore 00:07:44.767 ************************************ 00:07:44.768 13:21:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.768 13:21:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.768 13:21:50 -- accel/accel.sh@17 -- # local accel_module 00:07:44.768 13:21:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.768 13:21:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.768 13:21:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.768 13:21:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.768 13:21:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.768 13:21:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.768 13:21:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.768 13:21:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.768 13:21:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.768 13:21:50 -- accel/accel.sh@42 -- # jq -r . 00:07:44.768 [2024-12-15 13:21:50.175347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.768 [2024-12-15 13:21:50.175429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71325 ] 00:07:44.768 [2024-12-15 13:21:50.305508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.768 [2024-12-15 13:21:50.363857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.768 [2024-12-15 13:21:50.363969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.768 [2024-12-15 13:21:50.364089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.768 [2024-12-15 13:21:50.364089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.144 13:21:51 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:46.144 00:07:46.144 SPDK Configuration: 00:07:46.144 Core mask: 0xf 00:07:46.144 00:07:46.144 Accel Perf Configuration: 00:07:46.144 Workload Type: decompress 00:07:46.144 Transfer size: 111250 bytes 00:07:46.144 Vector count 1 00:07:46.144 Module: software 00:07:46.144 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.144 Queue depth: 32 00:07:46.144 Allocate depth: 32 00:07:46.144 # threads/core: 1 00:07:46.144 Run time: 1 seconds 00:07:46.144 Verify: Yes 00:07:46.144 00:07:46.144 Running for 1 seconds... 00:07:46.144 00:07:46.144 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.144 ------------------------------------------------------------------------------------ 00:07:46.144 0,0 5120/s 211 MiB/s 0 0 00:07:46.144 3,0 5120/s 211 MiB/s 0 0 00:07:46.144 2,0 5120/s 211 MiB/s 0 0 00:07:46.144 1,0 5056/s 208 MiB/s 0 0 00:07:46.144 ==================================================================================== 00:07:46.144 Total 20416/s 2166 MiB/s 0 0' 00:07:46.144 13:21:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.144 13:21:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.144 13:21:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.144 13:21:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.144 13:21:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.144 13:21:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.144 13:21:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.144 13:21:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.144 13:21:51 -- accel/accel.sh@42 -- # jq -r . 00:07:46.144 [2024-12-15 13:21:51.585249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
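The accel_decomp_full_mcore variant differs from the previous run only in the extra -o 0 flag, and the config dump changes accordingly: the transfer size grows from 4096 to 111250 bytes, so each operation decompresses the whole bib block rather than a 4 KiB slice. Reading -o as the transfer-size selector is an inference from the two config dumps, not from accel_perf documentation. Side by side, as issued by the harness:
  # Both invocations as run by the harness (SPDK=/home/vagrant/spdk_repo/spdk,
  # accel_perf=$SPDK/build/examples/accel_perf); shown for comparison only, since
  # the -c /dev/fd/62 JSON config is generated by the calling script.
  accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf       # 4096-byte transfers
  accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf  # "full" 111250-byte transfers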
00:07:46.144 [2024-12-15 13:21:51.585642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:07:46.144 [2024-12-15 13:21:51.716716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.144 [2024-12-15 13:21:51.767826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.144 [2024-12-15 13:21:51.767980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.144 [2024-12-15 13:21:51.768072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.144 [2024-12-15 13:21:51.768354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val=0xf 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.144 13:21:51 -- accel/accel.sh@21 -- # val=decompress 00:07:46.144 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.144 13:21:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.144 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=software 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 
00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=32 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=32 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=1 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val=Yes 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:46.409 13:21:51 -- accel/accel.sh@21 -- # val= 00:07:46.409 13:21:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # IFS=: 00:07:46.409 13:21:51 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- 
accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@21 -- # val= 00:07:47.344 ************************************ 00:07:47.344 END TEST accel_decomp_full_mcore 00:07:47.344 ************************************ 00:07:47.344 13:21:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 13:21:52 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 13:21:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.344 13:21:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.344 13:21:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.344 00:07:47.344 real 0m2.820s 00:07:47.344 user 0m9.215s 00:07:47.344 sys 0m0.240s 00:07:47.344 13:21:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.344 13:21:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.344 13:21:53 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.344 13:21:53 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:47.344 13:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.344 13:21:53 -- common/autotest_common.sh@10 -- # set +x 00:07:47.344 ************************************ 00:07:47.344 START TEST accel_decomp_mthread 00:07:47.344 ************************************ 00:07:47.344 13:21:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.344 13:21:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.344 13:21:53 -- accel/accel.sh@17 -- # local accel_module 00:07:47.344 13:21:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.344 13:21:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.344 13:21:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.344 13:21:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.344 13:21:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.344 13:21:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.344 13:21:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.344 13:21:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.344 13:21:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.344 13:21:53 -- accel/accel.sh@42 -- # jq -r . 00:07:47.602 [2024-12-15 13:21:53.050444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.602 [2024-12-15 13:21:53.050539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71385 ] 00:07:47.603 [2024-12-15 13:21:53.188211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.603 [2024-12-15 13:21:53.233940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.979 13:21:54 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:48.979 00:07:48.979 SPDK Configuration: 00:07:48.979 Core mask: 0x1 00:07:48.979 00:07:48.979 Accel Perf Configuration: 00:07:48.979 Workload Type: decompress 00:07:48.979 Transfer size: 4096 bytes 00:07:48.979 Vector count 1 00:07:48.979 Module: software 00:07:48.979 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.979 Queue depth: 32 00:07:48.979 Allocate depth: 32 00:07:48.979 # threads/core: 2 00:07:48.979 Run time: 1 seconds 00:07:48.979 Verify: Yes 00:07:48.979 00:07:48.979 Running for 1 seconds... 00:07:48.979 00:07:48.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.979 ------------------------------------------------------------------------------------ 00:07:48.979 0,1 42784/s 78 MiB/s 0 0 00:07:48.979 0,0 42624/s 78 MiB/s 0 0 00:07:48.979 ==================================================================================== 00:07:48.979 Total 85408/s 333 MiB/s 0 0' 00:07:48.979 13:21:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.979 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:48.979 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:48.979 13:21:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.979 13:21:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.979 13:21:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.979 13:21:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.979 13:21:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.979 13:21:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.979 13:21:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.979 13:21:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.979 13:21:54 -- accel/accel.sh@42 -- # jq -r . 00:07:48.979 [2024-12-15 13:21:54.449174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
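accel_decomp_mthread takes the opposite approach to the mcore runs: no -m flag, so the run stays on a single core (0x1 per the EAL parameters), and -T 2 asks for two worker threads on that core ("# threads/core: 2" in the config dump), which is why the table shows rows 0,0 and 0,1. The invocation is verbatim from the trace apart from the $SPDK shorthand:
  # Single core, two threads per core; -c /dev/fd/62 is the harness-supplied JSON config.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2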
00:07:48.979 [2024-12-15 13:21:54.449270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:07:48.979 [2024-12-15 13:21:54.581869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.979 [2024-12-15 13:21:54.630322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=0x1 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=decompress 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=software 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=32 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- 
accel/accel.sh@21 -- # val=32 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=2 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val=Yes 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:49.243 13:21:54 -- accel/accel.sh@21 -- # val= 00:07:49.243 13:21:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # IFS=: 00:07:49.243 13:21:54 -- accel/accel.sh@20 -- # read -r var val 00:07:50.183 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.183 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.183 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.183 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.183 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.183 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.183 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.184 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.184 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.184 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.184 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@21 -- # val= 00:07:50.184 13:21:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # IFS=: 00:07:50.184 13:21:55 -- accel/accel.sh@20 -- # read -r var val 00:07:50.184 13:21:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.184 13:21:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:50.184 ************************************ 00:07:50.184 END TEST accel_decomp_mthread 00:07:50.184 ************************************ 00:07:50.184 13:21:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.184 00:07:50.184 real 0m2.814s 00:07:50.184 user 0m2.393s 00:07:50.184 sys 0m0.221s 00:07:50.184 13:21:55 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:50.184 13:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.442 13:21:55 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.442 13:21:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:50.442 13:21:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.443 13:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.443 ************************************ 00:07:50.443 START TEST accel_deomp_full_mthread 00:07:50.443 ************************************ 00:07:50.443 13:21:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.443 13:21:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.443 13:21:55 -- accel/accel.sh@17 -- # local accel_module 00:07:50.443 13:21:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.443 13:21:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.443 13:21:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.443 13:21:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.443 13:21:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.443 13:21:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.443 13:21:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.443 13:21:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.443 13:21:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.443 13:21:55 -- accel/accel.sh@42 -- # jq -r . 00:07:50.443 [2024-12-15 13:21:55.913905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.443 [2024-12-15 13:21:55.914163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71438 ] 00:07:50.443 [2024-12-15 13:21:56.046262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.443 [2024-12-15 13:21:56.094106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.819 13:21:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:51.819 00:07:51.819 SPDK Configuration: 00:07:51.819 Core mask: 0x1 00:07:51.819 00:07:51.819 Accel Perf Configuration: 00:07:51.819 Workload Type: decompress 00:07:51.819 Transfer size: 111250 bytes 00:07:51.819 Vector count 1 00:07:51.819 Module: software 00:07:51.819 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.819 Queue depth: 32 00:07:51.819 Allocate depth: 32 00:07:51.819 # threads/core: 2 00:07:51.819 Run time: 1 seconds 00:07:51.819 Verify: Yes 00:07:51.819 00:07:51.819 Running for 1 seconds... 
00:07:51.819 00:07:51.819 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.819 ------------------------------------------------------------------------------------ 00:07:51.819 0,1 2848/s 117 MiB/s 0 0 00:07:51.819 0,0 2848/s 117 MiB/s 0 0 00:07:51.819 ==================================================================================== 00:07:51.819 Total 5696/s 604 MiB/s 0 0' 00:07:51.819 13:21:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.819 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:51.819 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:51.819 13:21:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.819 13:21:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.819 13:21:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.819 13:21:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.819 13:21:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.819 13:21:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.819 13:21:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.819 13:21:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.819 13:21:57 -- accel/accel.sh@42 -- # jq -r . 00:07:51.819 [2024-12-15 13:21:57.337829] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.819 [2024-12-15 13:21:57.337939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71453 ] 00:07:51.819 [2024-12-15 13:21:57.477910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.078 [2024-12-15 13:21:57.527483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=0x1 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=decompress 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=software 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=32 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=32 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=2 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val=Yes 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:52.078 13:21:57 -- accel/accel.sh@21 -- # val= 00:07:52.078 13:21:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # IFS=: 00:07:52.078 13:21:57 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # 
read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@21 -- # val= 00:07:53.454 13:21:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # IFS=: 00:07:53.454 13:21:58 -- accel/accel.sh@20 -- # read -r var val 00:07:53.454 13:21:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.454 13:21:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:53.454 13:21:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.454 00:07:53.454 real 0m2.846s 00:07:53.454 user 0m2.434s 00:07:53.454 sys 0m0.211s 00:07:53.454 13:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.454 13:21:58 -- common/autotest_common.sh@10 -- # set +x 00:07:53.454 ************************************ 00:07:53.454 END TEST accel_deomp_full_mthread 00:07:53.454 ************************************ 00:07:53.454 13:21:58 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:53.454 13:21:58 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.454 13:21:58 -- accel/accel.sh@129 -- # build_accel_config 00:07:53.454 13:21:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.454 13:21:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:53.454 13:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.454 13:21:58 -- common/autotest_common.sh@10 -- # set +x 00:07:53.454 13:21:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.454 13:21:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.454 13:21:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.454 13:21:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.454 13:21:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.454 13:21:58 -- accel/accel.sh@42 -- # jq -r . 00:07:53.454 ************************************ 00:07:53.454 START TEST accel_dif_functional_tests 00:07:53.454 ************************************ 00:07:53.454 13:21:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.454 [2024-12-15 13:21:58.849730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
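accel_dif_functional_tests switches from accel_perf to a dedicated CUnit binary that exercises DIF generate/verify through the accel framework. The ERROR lines in its output (Failed to compare Guard / App Tag / Ref Tag, bounce_iovs size) come from the negative test cases, which still report passed. As invoked by the harness:
  # Verbatim from the trace; -c /dev/fd/62 is the JSON accel config the harness
  # generates, and the EAL parameters below show it running with a 0x7 core mask.
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62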
00:07:53.454 [2024-12-15 13:21:58.849823] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71489 ] 00:07:53.454 [2024-12-15 13:21:58.991787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.454 [2024-12-15 13:21:59.058394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.454 [2024-12-15 13:21:59.058536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.454 [2024-12-15 13:21:59.058541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.713 00:07:53.713 00:07:53.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.713 http://cunit.sourceforge.net/ 00:07:53.713 00:07:53.713 00:07:53.713 Suite: accel_dif 00:07:53.713 Test: verify: DIF generated, GUARD check ...passed 00:07:53.713 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.713 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.713 Test: verify: DIF not generated, GUARD check ...passed 00:07:53.713 Test: verify: DIF not generated, APPTAG check ...[2024-12-15 13:21:59.153227] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.713 [2024-12-15 13:21:59.153308] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.713 [2024-12-15 13:21:59.153352] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.713 passed 00:07:53.713 Test: verify: DIF not generated, REFTAG check ...passed 00:07:53.713 Test: verify: APPTAG correct, APPTAG check ...[2024-12-15 13:21:59.153380] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.713 [2024-12-15 13:21:59.153412] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.713 [2024-12-15 13:21:59.153438] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.713 passed 00:07:53.713 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-15 13:21:59.153660] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.713 passed 00:07:53.713 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:53.713 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:53.713 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.713 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:53.713 Test: generate copy: DIF generated, GUARD check ...[2024-12-15 13:21:59.153867] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.713 passed 00:07:53.713 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:53.713 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.713 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.713 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.713 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.713 Test: generate copy: iovecs-len validate ...[2024-12-15 13:21:59.154488] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:53.713 passed 00:07:53.713 Test: generate copy: buffer alignment validate ...passed 00:07:53.713 00:07:53.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.713 suites 1 1 n/a 0 0 00:07:53.713 tests 20 20 20 0 0 00:07:53.713 asserts 204 204 204 0 n/a 00:07:53.713 00:07:53.713 Elapsed time = 0.005 seconds 00:07:53.713 00:07:53.713 real 0m0.547s 00:07:53.713 user 0m0.727s 00:07:53.713 sys 0m0.165s 00:07:53.713 ************************************ 00:07:53.713 END TEST accel_dif_functional_tests 00:07:53.713 ************************************ 00:07:53.713 13:21:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.713 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:53.713 ************************************ 00:07:53.713 END TEST accel 00:07:53.713 ************************************ 00:07:53.713 00:07:53.713 real 1m0.339s 00:07:53.713 user 1m4.888s 00:07:53.713 sys 0m5.831s 00:07:53.713 13:21:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.713 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:53.972 13:21:59 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.972 13:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.972 13:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.972 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:53.972 ************************************ 00:07:53.972 START TEST accel_rpc 00:07:53.972 ************************************ 00:07:53.972 13:21:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.972 * Looking for test storage... 00:07:53.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:53.972 13:21:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.972 13:21:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.972 13:21:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.972 13:21:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.972 13:21:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.972 13:21:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.972 13:21:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.972 13:21:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.972 13:21:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.972 13:21:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.972 13:21:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.972 13:21:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.972 13:21:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.973 13:21:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.973 13:21:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.973 13:21:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.973 13:21:59 -- scripts/common.sh@344 -- # : 1 00:07:53.973 13:21:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.973 13:21:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.973 13:21:59 -- scripts/common.sh@364 -- # decimal 1 00:07:53.973 13:21:59 -- scripts/common.sh@352 -- # local d=1 00:07:53.973 13:21:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.973 13:21:59 -- scripts/common.sh@354 -- # echo 1 00:07:53.973 13:21:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.973 13:21:59 -- scripts/common.sh@365 -- # decimal 2 00:07:53.973 13:21:59 -- scripts/common.sh@352 -- # local d=2 00:07:53.973 13:21:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.973 13:21:59 -- scripts/common.sh@354 -- # echo 2 00:07:53.973 13:21:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.973 13:21:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.973 13:21:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.973 13:21:59 -- scripts/common.sh@367 -- # return 0 00:07:53.973 13:21:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.973 13:21:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.973 --rc genhtml_branch_coverage=1 00:07:53.973 --rc genhtml_function_coverage=1 00:07:53.973 --rc genhtml_legend=1 00:07:53.973 --rc geninfo_all_blocks=1 00:07:53.973 --rc geninfo_unexecuted_blocks=1 00:07:53.973 00:07:53.973 ' 00:07:53.973 13:21:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.973 --rc genhtml_branch_coverage=1 00:07:53.973 --rc genhtml_function_coverage=1 00:07:53.973 --rc genhtml_legend=1 00:07:53.973 --rc geninfo_all_blocks=1 00:07:53.973 --rc geninfo_unexecuted_blocks=1 00:07:53.973 00:07:53.973 ' 00:07:53.973 13:21:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.973 --rc genhtml_branch_coverage=1 00:07:53.973 --rc genhtml_function_coverage=1 00:07:53.973 --rc genhtml_legend=1 00:07:53.973 --rc geninfo_all_blocks=1 00:07:53.973 --rc geninfo_unexecuted_blocks=1 00:07:53.973 00:07:53.973 ' 00:07:53.973 13:21:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.973 --rc genhtml_branch_coverage=1 00:07:53.973 --rc genhtml_function_coverage=1 00:07:53.973 --rc genhtml_legend=1 00:07:53.973 --rc geninfo_all_blocks=1 00:07:53.973 --rc geninfo_unexecuted_blocks=1 00:07:53.973 00:07:53.973 ' 00:07:53.973 13:21:59 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:53.973 13:21:59 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71566 00:07:53.973 13:21:59 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:53.973 13:21:59 -- accel/accel_rpc.sh@15 -- # waitforlisten 71566 00:07:53.973 13:21:59 -- common/autotest_common.sh@829 -- # '[' -z 71566 ']' 00:07:53.973 13:21:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.973 13:21:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.973 13:21:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.973 13:21:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.973 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.232 [2024-12-15 13:21:59.680024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.232 [2024-12-15 13:21:59.680317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71566 ] 00:07:54.232 [2024-12-15 13:21:59.820688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.232 [2024-12-15 13:21:59.871278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.232 [2024-12-15 13:21:59.871734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.232 13:21:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.232 13:21:59 -- common/autotest_common.sh@862 -- # return 0 00:07:54.232 13:21:59 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:54.232 13:21:59 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:54.232 13:21:59 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:54.232 13:21:59 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:54.232 13:21:59 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:54.232 13:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.232 13:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.232 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.490 ************************************ 00:07:54.490 START TEST accel_assign_opcode 00:07:54.490 ************************************ 00:07:54.490 13:21:59 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:54.490 13:21:59 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:54.490 13:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.490 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.490 [2024-12-15 13:21:59.940235] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:54.490 13:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.490 13:21:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:54.490 13:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.490 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.490 [2024-12-15 13:21:59.952234] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:54.490 13:21:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.490 13:21:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:54.490 13:21:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.490 13:21:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.490 13:22:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.490 13:22:00 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:54.490 13:22:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.490 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.490 13:22:00 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:54.490 13:22:00 -- accel/accel_rpc.sh@42 -- # grep software 00:07:54.749 13:22:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.749 software 00:07:54.749 
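The accel_assign_opcode test above talks to a spdk_tgt started with --wait-for-rpc rather than launching a perf binary: it assigns the copy opcode to a deliberately bogus module, re-assigns it to software, completes framework initialization, and then confirms the assignment. The sequence below mirrors the trace; rpc_cmd is the harness's RPC helper (roughly scripts/rpc.py aimed at the target's socket), so treating it as directly runnable outside the harness is an assumption.
  rpc_cmd accel_assign_opc -o copy -m incorrect    # accepted pre-init, logged as a NOTICE
  rpc_cmd accel_assign_opc -o copy -m software     # reassign copy to the software module
  rpc_cmd framework_start_init                     # finish subsystem initialization
  rpc_cmd accel_get_opc_assignments | jq -r .copy  # prints "software"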
************************************ 00:07:54.749 END TEST accel_assign_opcode 00:07:54.749 ************************************ 00:07:54.749 00:07:54.749 real 0m0.292s 00:07:54.749 user 0m0.055s 00:07:54.749 sys 0m0.011s 00:07:54.749 13:22:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.749 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.749 13:22:00 -- accel/accel_rpc.sh@55 -- # killprocess 71566 00:07:54.749 13:22:00 -- common/autotest_common.sh@936 -- # '[' -z 71566 ']' 00:07:54.749 13:22:00 -- common/autotest_common.sh@940 -- # kill -0 71566 00:07:54.749 13:22:00 -- common/autotest_common.sh@941 -- # uname 00:07:54.749 13:22:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.749 13:22:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71566 00:07:54.749 killing process with pid 71566 00:07:54.749 13:22:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:54.749 13:22:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:54.749 13:22:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71566' 00:07:54.749 13:22:00 -- common/autotest_common.sh@955 -- # kill 71566 00:07:54.749 13:22:00 -- common/autotest_common.sh@960 -- # wait 71566 00:07:55.007 ************************************ 00:07:55.007 END TEST accel_rpc 00:07:55.007 ************************************ 00:07:55.007 00:07:55.007 real 0m1.204s 00:07:55.007 user 0m1.112s 00:07:55.007 sys 0m0.424s 00:07:55.007 13:22:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:55.007 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.007 13:22:00 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.007 13:22:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.007 13:22:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.007 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.007 ************************************ 00:07:55.007 START TEST app_cmdline 00:07:55.007 ************************************ 00:07:55.007 13:22:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.266 * Looking for test storage... 
00:07:55.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.266 13:22:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.266 13:22:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.266 13:22:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.266 13:22:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.266 13:22:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.266 13:22:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.266 13:22:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.266 13:22:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.266 13:22:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.266 13:22:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.266 13:22:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.266 13:22:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.266 13:22:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.266 13:22:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.266 13:22:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.266 13:22:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.266 13:22:00 -- scripts/common.sh@344 -- # : 1 00:07:55.266 13:22:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.266 13:22:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:55.266 13:22:00 -- scripts/common.sh@364 -- # decimal 1 00:07:55.266 13:22:00 -- scripts/common.sh@352 -- # local d=1 00:07:55.266 13:22:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.266 13:22:00 -- scripts/common.sh@354 -- # echo 1 00:07:55.266 13:22:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.266 13:22:00 -- scripts/common.sh@365 -- # decimal 2 00:07:55.266 13:22:00 -- scripts/common.sh@352 -- # local d=2 00:07:55.266 13:22:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.266 13:22:00 -- scripts/common.sh@354 -- # echo 2 00:07:55.266 13:22:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.266 13:22:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.266 13:22:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.266 13:22:00 -- scripts/common.sh@367 -- # return 0 00:07:55.266 13:22:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.266 13:22:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.266 --rc genhtml_branch_coverage=1 00:07:55.266 --rc genhtml_function_coverage=1 00:07:55.266 --rc genhtml_legend=1 00:07:55.266 --rc geninfo_all_blocks=1 00:07:55.266 --rc geninfo_unexecuted_blocks=1 00:07:55.266 00:07:55.266 ' 00:07:55.266 13:22:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.266 --rc genhtml_branch_coverage=1 00:07:55.266 --rc genhtml_function_coverage=1 00:07:55.266 --rc genhtml_legend=1 00:07:55.266 --rc geninfo_all_blocks=1 00:07:55.266 --rc geninfo_unexecuted_blocks=1 00:07:55.266 00:07:55.266 ' 00:07:55.266 13:22:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.266 --rc genhtml_branch_coverage=1 00:07:55.266 --rc genhtml_function_coverage=1 00:07:55.266 --rc genhtml_legend=1 00:07:55.266 --rc geninfo_all_blocks=1 00:07:55.266 --rc geninfo_unexecuted_blocks=1 00:07:55.266 00:07:55.266 ' 00:07:55.266 13:22:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.266 --rc genhtml_branch_coverage=1 00:07:55.266 --rc genhtml_function_coverage=1 00:07:55.266 --rc genhtml_legend=1 00:07:55.266 --rc geninfo_all_blocks=1 00:07:55.266 --rc geninfo_unexecuted_blocks=1 00:07:55.266 00:07:55.266 ' 00:07:55.266 13:22:00 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:55.266 13:22:00 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71667 00:07:55.266 13:22:00 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:55.266 13:22:00 -- app/cmdline.sh@18 -- # waitforlisten 71667 00:07:55.266 13:22:00 -- common/autotest_common.sh@829 -- # '[' -z 71667 ']' 00:07:55.266 13:22:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.266 13:22:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.266 13:22:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.266 13:22:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.266 13:22:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.266 [2024-12-15 13:22:00.922717] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.266 [2024-12-15 13:22:00.923005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71667 ] 00:07:55.525 [2024-12-15 13:22:01.061546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.525 [2024-12-15 13:22:01.113480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.525 [2024-12-15 13:22:01.113972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.459 13:22:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.459 13:22:01 -- common/autotest_common.sh@862 -- # return 0 00:07:56.459 13:22:01 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:56.459 { 00:07:56.459 "fields": { 00:07:56.459 "commit": "c13c99a5e", 00:07:56.459 "major": 24, 00:07:56.459 "minor": 1, 00:07:56.459 "patch": 1, 00:07:56.459 "suffix": "-pre" 00:07:56.459 }, 00:07:56.459 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:56.459 } 00:07:56.459 13:22:02 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.459 13:22:02 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.459 13:22:02 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:56.459 13:22:02 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.459 13:22:02 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.459 13:22:02 -- app/cmdline.sh@26 -- # sort 00:07:56.459 13:22:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.459 13:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:56.459 13:22:02 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.459 13:22:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.718 13:22:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.718 13:22:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.718 13:22:02 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.718 13:22:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:56.718 13:22:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.718 13:22:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.718 13:22:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.718 13:22:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.718 13:22:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.718 13:22:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.718 13:22:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.718 13:22:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.718 13:22:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:56.718 13:22:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.978 2024/12/15 13:22:02 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:56.978 request: 00:07:56.978 { 00:07:56.978 "method": "env_dpdk_get_mem_stats", 00:07:56.978 "params": {} 00:07:56.978 } 00:07:56.978 Got JSON-RPC error response 00:07:56.978 GoRPCClient: error on JSON-RPC call 00:07:56.978 13:22:02 -- common/autotest_common.sh@653 -- # es=1 00:07:56.978 13:22:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.978 13:22:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:56.978 13:22:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.978 13:22:02 -- app/cmdline.sh@1 -- # killprocess 71667 00:07:56.978 13:22:02 -- common/autotest_common.sh@936 -- # '[' -z 71667 ']' 00:07:56.978 13:22:02 -- common/autotest_common.sh@940 -- # kill -0 71667 00:07:56.978 13:22:02 -- common/autotest_common.sh@941 -- # uname 00:07:56.978 13:22:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:56.978 13:22:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71667 00:07:56.978 13:22:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:56.978 13:22:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:56.978 killing process with pid 71667 00:07:56.978 13:22:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71667' 00:07:56.978 13:22:02 -- common/autotest_common.sh@955 -- # kill 71667 00:07:56.978 13:22:02 -- common/autotest_common.sh@960 -- # wait 71667 00:07:57.236 00:07:57.236 real 0m2.124s 00:07:57.236 user 0m2.643s 00:07:57.236 sys 0m0.473s 00:07:57.236 13:22:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.236 13:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.236 ************************************ 00:07:57.236 END TEST app_cmdline 00:07:57.236 ************************************ 00:07:57.236 13:22:02 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:57.236 13:22:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.236 13:22:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.236 13:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.236 ************************************ 00:07:57.236 START TEST version 00:07:57.236 ************************************ 00:07:57.236 13:22:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:57.494 * Looking for test storage... 00:07:57.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:57.495 13:22:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:57.495 13:22:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:57.495 13:22:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:57.495 13:22:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:57.495 13:22:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:57.495 13:22:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:57.495 13:22:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:57.495 13:22:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:57.495 13:22:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:57.495 13:22:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.495 13:22:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:57.495 13:22:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:57.495 13:22:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:57.495 13:22:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:57.495 13:22:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:57.495 13:22:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:57.495 13:22:03 -- scripts/common.sh@344 -- # : 1 00:07:57.495 13:22:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:57.495 13:22:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.495 13:22:03 -- scripts/common.sh@364 -- # decimal 1 00:07:57.495 13:22:03 -- scripts/common.sh@352 -- # local d=1 00:07:57.495 13:22:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.495 13:22:03 -- scripts/common.sh@354 -- # echo 1 00:07:57.495 13:22:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:57.495 13:22:03 -- scripts/common.sh@365 -- # decimal 2 00:07:57.495 13:22:03 -- scripts/common.sh@352 -- # local d=2 00:07:57.495 13:22:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.495 13:22:03 -- scripts/common.sh@354 -- # echo 2 00:07:57.495 13:22:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:57.495 13:22:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:57.495 13:22:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:57.495 13:22:03 -- scripts/common.sh@367 -- # return 0 00:07:57.495 13:22:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.495 13:22:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.495 --rc genhtml_branch_coverage=1 00:07:57.495 --rc genhtml_function_coverage=1 00:07:57.495 --rc genhtml_legend=1 00:07:57.495 --rc geninfo_all_blocks=1 00:07:57.495 --rc geninfo_unexecuted_blocks=1 00:07:57.495 00:07:57.495 ' 00:07:57.495 13:22:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.495 --rc genhtml_branch_coverage=1 00:07:57.495 --rc genhtml_function_coverage=1 00:07:57.495 --rc genhtml_legend=1 00:07:57.495 --rc geninfo_all_blocks=1 00:07:57.495 --rc geninfo_unexecuted_blocks=1 00:07:57.495 00:07:57.495 ' 00:07:57.495 
13:22:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.495 --rc genhtml_branch_coverage=1 00:07:57.495 --rc genhtml_function_coverage=1 00:07:57.495 --rc genhtml_legend=1 00:07:57.495 --rc geninfo_all_blocks=1 00:07:57.495 --rc geninfo_unexecuted_blocks=1 00:07:57.495 00:07:57.495 ' 00:07:57.495 13:22:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.495 --rc genhtml_branch_coverage=1 00:07:57.495 --rc genhtml_function_coverage=1 00:07:57.495 --rc genhtml_legend=1 00:07:57.495 --rc geninfo_all_blocks=1 00:07:57.495 --rc geninfo_unexecuted_blocks=1 00:07:57.495 00:07:57.495 ' 00:07:57.495 13:22:03 -- app/version.sh@17 -- # get_header_version major 00:07:57.495 13:22:03 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.495 13:22:03 -- app/version.sh@14 -- # cut -f2 00:07:57.495 13:22:03 -- app/version.sh@14 -- # tr -d '"' 00:07:57.495 13:22:03 -- app/version.sh@17 -- # major=24 00:07:57.495 13:22:03 -- app/version.sh@18 -- # get_header_version minor 00:07:57.495 13:22:03 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.495 13:22:03 -- app/version.sh@14 -- # tr -d '"' 00:07:57.495 13:22:03 -- app/version.sh@14 -- # cut -f2 00:07:57.495 13:22:03 -- app/version.sh@18 -- # minor=1 00:07:57.495 13:22:03 -- app/version.sh@19 -- # get_header_version patch 00:07:57.495 13:22:03 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.495 13:22:03 -- app/version.sh@14 -- # cut -f2 00:07:57.495 13:22:03 -- app/version.sh@14 -- # tr -d '"' 00:07:57.495 13:22:03 -- app/version.sh@19 -- # patch=1 00:07:57.495 13:22:03 -- app/version.sh@20 -- # get_header_version suffix 00:07:57.495 13:22:03 -- app/version.sh@14 -- # cut -f2 00:07:57.495 13:22:03 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.495 13:22:03 -- app/version.sh@14 -- # tr -d '"' 00:07:57.495 13:22:03 -- app/version.sh@20 -- # suffix=-pre 00:07:57.495 13:22:03 -- app/version.sh@22 -- # version=24.1 00:07:57.495 13:22:03 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:57.495 13:22:03 -- app/version.sh@25 -- # version=24.1.1 00:07:57.495 13:22:03 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:57.495 13:22:03 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:57.495 13:22:03 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:57.495 13:22:03 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:57.495 13:22:03 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:57.495 ************************************ 00:07:57.495 END TEST version 00:07:57.495 ************************************ 00:07:57.495 00:07:57.495 real 0m0.244s 00:07:57.495 user 0m0.166s 00:07:57.495 sys 0m0.115s 00:07:57.495 13:22:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.495 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.495 13:22:03 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:57.495 
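The version.sh steps above assemble the SPDK version string twice and compare the results: once by scraping the #define lines out of include/spdk/version.h, once by asking the Python bindings for spdk.__version__. A minimal sketch of the header-scraping side follows, using the same grep/cut/tr pipeline as the trace; the rc0 mapping for the -pre suffix is inferred from the traced output rather than the script source:

```bash
#!/usr/bin/env bash
# Sketch of the get_header_version pipeline traced from test/app/version.sh.
HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h

get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$HDR" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)    # 24 in this run
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 1
suffix=$(get_header_version SUFFIX)  # -pre

version="$major.$minor"
(( patch != 0 )) && version="$version.$patch"
[[ $suffix == -pre ]] && version="${version}rc0"   # traced result: 24.1.1rc0
echo "$version"
```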
13:22:03 -- spdk/autotest.sh@191 -- # uname -s 00:07:57.495 13:22:03 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:57.495 13:22:03 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:57.495 13:22:03 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:57.495 13:22:03 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:57.495 13:22:03 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:57.495 13:22:03 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:57.495 13:22:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.495 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.754 13:22:03 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:57.754 13:22:03 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:57.754 13:22:03 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:57.754 13:22:03 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:57.754 13:22:03 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:57.754 13:22:03 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:57.754 13:22:03 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.754 13:22:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:57.754 13:22:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.754 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.754 ************************************ 00:07:57.754 START TEST nvmf_tcp 00:07:57.754 ************************************ 00:07:57.754 13:22:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.754 * Looking for test storage... 00:07:57.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:57.754 13:22:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:57.754 13:22:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:57.754 13:22:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:57.754 13:22:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:57.754 13:22:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:57.754 13:22:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:57.754 13:22:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:57.754 13:22:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:57.754 13:22:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:57.754 13:22:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.754 13:22:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:57.754 13:22:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:57.754 13:22:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:57.754 13:22:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:57.754 13:22:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:57.754 13:22:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:57.754 13:22:03 -- scripts/common.sh@344 -- # : 1 00:07:57.754 13:22:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:57.754 13:22:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.754 13:22:03 -- scripts/common.sh@364 -- # decimal 1 00:07:57.754 13:22:03 -- scripts/common.sh@352 -- # local d=1 00:07:57.754 13:22:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.754 13:22:03 -- scripts/common.sh@354 -- # echo 1 00:07:57.754 13:22:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:57.754 13:22:03 -- scripts/common.sh@365 -- # decimal 2 00:07:57.754 13:22:03 -- scripts/common.sh@352 -- # local d=2 00:07:57.754 13:22:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.754 13:22:03 -- scripts/common.sh@354 -- # echo 2 00:07:57.754 13:22:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:57.754 13:22:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:57.754 13:22:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:57.754 13:22:03 -- scripts/common.sh@367 -- # return 0 00:07:57.754 13:22:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.754 13:22:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.754 --rc genhtml_branch_coverage=1 00:07:57.754 --rc genhtml_function_coverage=1 00:07:57.754 --rc genhtml_legend=1 00:07:57.754 --rc geninfo_all_blocks=1 00:07:57.754 --rc geninfo_unexecuted_blocks=1 00:07:57.754 00:07:57.754 ' 00:07:57.754 13:22:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.754 --rc genhtml_branch_coverage=1 00:07:57.754 --rc genhtml_function_coverage=1 00:07:57.754 --rc genhtml_legend=1 00:07:57.754 --rc geninfo_all_blocks=1 00:07:57.754 --rc geninfo_unexecuted_blocks=1 00:07:57.754 00:07:57.754 ' 00:07:57.754 13:22:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:57.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.754 --rc genhtml_branch_coverage=1 00:07:57.754 --rc genhtml_function_coverage=1 00:07:57.754 --rc genhtml_legend=1 00:07:57.754 --rc geninfo_all_blocks=1 00:07:57.754 --rc geninfo_unexecuted_blocks=1 00:07:57.754 00:07:57.754 ' 00:07:57.755 13:22:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:57.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.755 --rc genhtml_branch_coverage=1 00:07:57.755 --rc genhtml_function_coverage=1 00:07:57.755 --rc genhtml_legend=1 00:07:57.755 --rc geninfo_all_blocks=1 00:07:57.755 --rc geninfo_unexecuted_blocks=1 00:07:57.755 00:07:57.755 ' 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.755 13:22:03 -- nvmf/common.sh@7 -- # uname -s 00:07:57.755 13:22:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.755 13:22:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.755 13:22:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.755 13:22:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.755 13:22:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.755 13:22:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.755 13:22:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.755 13:22:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.755 13:22:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.755 13:22:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.755 13:22:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:07:57.755 13:22:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:07:57.755 13:22:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.755 13:22:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.755 13:22:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.755 13:22:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.755 13:22:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.755 13:22:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.755 13:22:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.755 13:22:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.755 13:22:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.755 13:22:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.755 13:22:03 -- paths/export.sh@5 -- # export PATH 00:07:57.755 13:22:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.755 13:22:03 -- nvmf/common.sh@46 -- # : 0 00:07:57.755 13:22:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:57.755 13:22:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:57.755 13:22:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:57.755 13:22:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.755 13:22:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.755 13:22:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:57.755 13:22:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:57.755 13:22:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:57.755 13:22:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.755 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:57.755 13:22:03 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.755 13:22:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:57.755 13:22:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.755 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.755 ************************************ 00:07:57.755 START TEST nvmf_example 00:07:57.755 ************************************ 00:07:57.755 13:22:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.014 * Looking for test storage... 00:07:58.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.014 13:22:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:58.014 13:22:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:58.014 13:22:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:58.014 13:22:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:58.014 13:22:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:58.014 13:22:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:58.014 13:22:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:58.014 13:22:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:58.014 13:22:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:58.014 13:22:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.014 13:22:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:58.014 13:22:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:58.014 13:22:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:58.014 13:22:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:58.014 13:22:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:58.014 13:22:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:58.014 13:22:03 -- scripts/common.sh@344 -- # : 1 00:07:58.014 13:22:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:58.014 13:22:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.014 13:22:03 -- scripts/common.sh@364 -- # decimal 1 00:07:58.014 13:22:03 -- scripts/common.sh@352 -- # local d=1 00:07:58.014 13:22:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.014 13:22:03 -- scripts/common.sh@354 -- # echo 1 00:07:58.014 13:22:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:58.014 13:22:03 -- scripts/common.sh@365 -- # decimal 2 00:07:58.014 13:22:03 -- scripts/common.sh@352 -- # local d=2 00:07:58.014 13:22:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.014 13:22:03 -- scripts/common.sh@354 -- # echo 2 00:07:58.014 13:22:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:58.014 13:22:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:58.014 13:22:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:58.014 13:22:03 -- scripts/common.sh@367 -- # return 0 00:07:58.014 13:22:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.014 13:22:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:58.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.014 --rc genhtml_branch_coverage=1 00:07:58.014 --rc genhtml_function_coverage=1 00:07:58.014 --rc genhtml_legend=1 00:07:58.014 --rc geninfo_all_blocks=1 00:07:58.014 --rc geninfo_unexecuted_blocks=1 00:07:58.014 00:07:58.014 ' 00:07:58.014 13:22:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:58.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.014 --rc genhtml_branch_coverage=1 00:07:58.014 --rc genhtml_function_coverage=1 00:07:58.014 --rc genhtml_legend=1 00:07:58.014 --rc geninfo_all_blocks=1 00:07:58.014 --rc geninfo_unexecuted_blocks=1 00:07:58.014 00:07:58.014 ' 00:07:58.014 13:22:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:58.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.014 --rc genhtml_branch_coverage=1 00:07:58.014 --rc genhtml_function_coverage=1 00:07:58.014 --rc genhtml_legend=1 00:07:58.014 --rc geninfo_all_blocks=1 00:07:58.014 --rc geninfo_unexecuted_blocks=1 00:07:58.014 00:07:58.014 ' 00:07:58.014 13:22:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:58.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.014 --rc genhtml_branch_coverage=1 00:07:58.014 --rc genhtml_function_coverage=1 00:07:58.014 --rc genhtml_legend=1 00:07:58.014 --rc geninfo_all_blocks=1 00:07:58.014 --rc geninfo_unexecuted_blocks=1 00:07:58.014 00:07:58.014 ' 00:07:58.014 13:22:03 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.014 13:22:03 -- nvmf/common.sh@7 -- # uname -s 00:07:58.014 13:22:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.014 13:22:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.014 13:22:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.014 13:22:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.014 13:22:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.014 13:22:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.014 13:22:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.014 13:22:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.014 13:22:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.014 13:22:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.015 13:22:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
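In the nvmf/common.sh setup traced here, the initiator identity comes from `nvme gen-hostnqn`, and the host ID that later feeds `--hostid=` is the UUID portion of that NQN (the traced NVME_HOSTID matches the UUID suffix of the generated NQN). A minimal sketch of that derivation; the parameter expansion used is an assumption consistent with the logged values:

```bash
#!/usr/bin/env bash
# Derive the initiator identity the way the traced nvmf/common.sh values suggest.
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:245f2070-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: strip through the last ':', leaving the UUID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

echo "NQN:    $NVME_HOSTNQN"
echo "HostID: $NVME_HOSTID"
```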
00:07:58.015 13:22:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:07:58.015 13:22:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.015 13:22:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.015 13:22:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.015 13:22:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.015 13:22:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.015 13:22:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.015 13:22:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.015 13:22:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.015 13:22:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.015 13:22:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.015 13:22:03 -- paths/export.sh@5 -- # export PATH 00:07:58.015 13:22:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.015 13:22:03 -- nvmf/common.sh@46 -- # : 0 00:07:58.015 13:22:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.015 13:22:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.015 13:22:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.015 13:22:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.015 13:22:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.015 13:22:03 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:58.015 13:22:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.015 13:22:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.015 13:22:03 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:58.015 13:22:03 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:58.015 13:22:03 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:58.015 13:22:03 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:58.015 13:22:03 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:58.015 13:22:03 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:58.015 13:22:03 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:58.015 13:22:03 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:58.015 13:22:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.015 13:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.015 13:22:03 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:58.015 13:22:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:58.015 13:22:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.015 13:22:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:58.015 13:22:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:58.015 13:22:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:58.015 13:22:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.015 13:22:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.015 13:22:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.015 13:22:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:58.015 13:22:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:58.015 13:22:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:58.015 13:22:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:58.015 13:22:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:58.015 13:22:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:58.015 13:22:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.015 13:22:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.015 13:22:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:58.015 13:22:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:58.015 13:22:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.015 13:22:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.015 13:22:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.015 13:22:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.015 13:22:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.015 13:22:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.015 13:22:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.015 13:22:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.015 13:22:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:58.015 Cannot find device "nvmf_init_br" 00:07:58.015 13:22:03 -- nvmf/common.sh@153 -- # true 00:07:58.015 13:22:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:58.015 Cannot find device "nvmf_tgt_br" 00:07:58.015 13:22:03 -- nvmf/common.sh@154 -- # true 00:07:58.015 13:22:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.015 Cannot find device "nvmf_tgt_br2" 
00:07:58.015 13:22:03 -- nvmf/common.sh@155 -- # true 00:07:58.015 13:22:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:58.015 Cannot find device "nvmf_init_br" 00:07:58.015 13:22:03 -- nvmf/common.sh@156 -- # true 00:07:58.015 13:22:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:58.015 Cannot find device "nvmf_tgt_br" 00:07:58.015 13:22:03 -- nvmf/common.sh@157 -- # true 00:07:58.015 13:22:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:58.274 Cannot find device "nvmf_tgt_br2" 00:07:58.274 13:22:03 -- nvmf/common.sh@158 -- # true 00:07:58.274 13:22:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:58.274 Cannot find device "nvmf_br" 00:07:58.274 13:22:03 -- nvmf/common.sh@159 -- # true 00:07:58.274 13:22:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:58.274 Cannot find device "nvmf_init_if" 00:07:58.274 13:22:03 -- nvmf/common.sh@160 -- # true 00:07:58.274 13:22:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.274 13:22:03 -- nvmf/common.sh@161 -- # true 00:07:58.274 13:22:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.274 13:22:03 -- nvmf/common.sh@162 -- # true 00:07:58.274 13:22:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.274 13:22:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.274 13:22:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.274 13:22:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.274 13:22:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.274 13:22:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.274 13:22:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.274 13:22:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:58.274 13:22:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:58.274 13:22:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:58.274 13:22:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:58.274 13:22:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:58.274 13:22:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:58.274 13:22:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.274 13:22:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.274 13:22:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.274 13:22:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:58.274 13:22:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:58.274 13:22:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.274 13:22:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.274 13:22:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.533 13:22:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.533 13:22:03 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.533 13:22:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:58.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:07:58.533 00:07:58.533 --- 10.0.0.2 ping statistics --- 00:07:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.533 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:07:58.533 13:22:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:58.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:58.533 00:07:58.533 --- 10.0.0.3 ping statistics --- 00:07:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.533 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:58.533 13:22:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:58.533 00:07:58.533 --- 10.0.0.1 ping statistics --- 00:07:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.533 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:58.533 13:22:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.533 13:22:04 -- nvmf/common.sh@421 -- # return 0 00:07:58.533 13:22:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:58.533 13:22:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.533 13:22:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:58.533 13:22:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:58.533 13:22:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.533 13:22:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:58.533 13:22:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:58.533 13:22:04 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:58.533 13:22:04 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:58.533 13:22:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.533 13:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:58.533 13:22:04 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:58.533 13:22:04 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:58.533 13:22:04 -- target/nvmf_example.sh@34 -- # nvmfpid=72039 00:07:58.533 13:22:04 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:58.533 13:22:04 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.533 13:22:04 -- target/nvmf_example.sh@36 -- # waitforlisten 72039 00:07:58.533 13:22:04 -- common/autotest_common.sh@829 -- # '[' -z 72039 ']' 00:07:58.533 13:22:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.533 13:22:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.533 13:22:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
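The nvmf_veth_init trace above builds the TCP test topology out of veth pairs and a single bridge: the target side lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the *_br peer ends are enslaved to nvmf_br so the two sides can reach each other over NVMe/TCP port 4420. A condensed replay of the traced commands, with the same names and addresses as the log and error handling omitted, is sketched below:

```bash
#!/usr/bin/env bash
# Condensed replay of the nvmf_veth_init commands traced above (requires root).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One veth pair per endpoint: the *_if end carries an IP, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace where the SPDK target will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec "$NS" ip link set "$dev" up; done

# Bridge the *_br ends together and let NVMe/TCP traffic on port 4420 through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # connectivity checks as in the trace
modprobe nvme-tcp
```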
00:07:58.533 13:22:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.533 13:22:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.469 13:22:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.469 13:22:05 -- common/autotest_common.sh@862 -- # return 0 00:07:59.469 13:22:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:59.469 13:22:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.469 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.469 13:22:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.469 13:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.469 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 13:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.727 13:22:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:59.727 13:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.727 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 13:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.727 13:22:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:59.727 13:22:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.727 13:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.727 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 13:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.727 13:22:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:59.727 13:22:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.727 13:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.727 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 13:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.727 13:22:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.727 13:22:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.727 13:22:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.727 13:22:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.727 13:22:05 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:59.727 13:22:05 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:11.973 Initializing NVMe Controllers 00:08:11.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:11.973 Initialization complete. Launching workers. 
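Before the perf run whose results follow, the example target is configured with five RPCs traced above: create the TCP transport, back it with a 64 MB malloc bdev using 512-byte blocks, expose that bdev as namespace 1 of cnode1, and listen on 10.0.0.2:4420. A condensed sketch of that sequence plus the traced spdk_nvme_perf invocation, with rpc.py standing in for the test's rpc_cmd wrapper and binary paths taken from the log:

```bash
#!/usr/bin/env bash
# Target-side bring-up and perf run, condensed from the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, options as traced
$RPC bdev_malloc_create 64 512                           # 64 MB bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: 64-deep 4 KiB random read/write mix for 10 seconds, flags as traced.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
```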
00:08:11.973 ======================================================== 00:08:11.973 Latency(us) 00:08:11.973 Device Information : IOPS MiB/s Average min max 00:08:11.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16875.95 65.92 3792.06 559.34 20190.75 00:08:11.973 ======================================================== 00:08:11.973 Total : 16875.95 65.92 3792.06 559.34 20190.75 00:08:11.973 00:08:11.973 13:22:15 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:11.973 13:22:15 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:11.973 13:22:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:11.973 13:22:15 -- nvmf/common.sh@116 -- # sync 00:08:11.973 13:22:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:11.973 13:22:15 -- nvmf/common.sh@119 -- # set +e 00:08:11.973 13:22:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:11.973 13:22:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:11.973 rmmod nvme_tcp 00:08:11.973 rmmod nvme_fabrics 00:08:11.973 rmmod nvme_keyring 00:08:11.973 13:22:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:11.973 13:22:15 -- nvmf/common.sh@123 -- # set -e 00:08:11.973 13:22:15 -- nvmf/common.sh@124 -- # return 0 00:08:11.973 13:22:15 -- nvmf/common.sh@477 -- # '[' -n 72039 ']' 00:08:11.973 13:22:15 -- nvmf/common.sh@478 -- # killprocess 72039 00:08:11.973 13:22:15 -- common/autotest_common.sh@936 -- # '[' -z 72039 ']' 00:08:11.973 13:22:15 -- common/autotest_common.sh@940 -- # kill -0 72039 00:08:11.973 13:22:15 -- common/autotest_common.sh@941 -- # uname 00:08:11.973 13:22:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.973 13:22:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72039 00:08:11.973 13:22:15 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:11.973 13:22:15 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:11.973 13:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72039' 00:08:11.973 killing process with pid 72039 00:08:11.973 13:22:15 -- common/autotest_common.sh@955 -- # kill 72039 00:08:11.973 13:22:15 -- common/autotest_common.sh@960 -- # wait 72039 00:08:11.973 nvmf threads initialize successfully 00:08:11.973 bdev subsystem init successfully 00:08:11.973 created a nvmf target service 00:08:11.973 create targets's poll groups done 00:08:11.973 all subsystems of target started 00:08:11.973 nvmf target is running 00:08:11.973 all subsystems of target stopped 00:08:11.973 destroy targets's poll groups done 00:08:11.973 destroyed the nvmf target service 00:08:11.973 bdev subsystem finish successfully 00:08:11.973 nvmf threads destroy successfully 00:08:11.973 13:22:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.973 13:22:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:11.973 13:22:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:11.973 13:22:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.973 13:22:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:11.973 13:22:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.974 13:22:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.974 13:22:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.974 13:22:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:11.974 13:22:15 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:11.974 13:22:15 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:11.974 13:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 00:08:11.974 real 0m12.424s 00:08:11.974 user 0m44.822s 00:08:11.974 sys 0m1.857s 00:08:11.974 13:22:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.974 13:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 ************************************ 00:08:11.974 END TEST nvmf_example 00:08:11.974 ************************************ 00:08:11.974 13:22:15 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:11.974 13:22:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.974 13:22:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.974 13:22:15 -- common/autotest_common.sh@10 -- # set +x 00:08:11.974 ************************************ 00:08:11.974 START TEST nvmf_filesystem 00:08:11.974 ************************************ 00:08:11.974 13:22:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:11.974 * Looking for test storage... 00:08:11.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.974 13:22:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.974 13:22:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.974 13:22:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.974 13:22:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.974 13:22:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.974 13:22:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.974 13:22:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.974 13:22:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.974 13:22:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.974 13:22:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.974 13:22:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.974 13:22:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.974 13:22:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.974 13:22:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.974 13:22:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.974 13:22:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.974 13:22:16 -- scripts/common.sh@344 -- # : 1 00:08:11.974 13:22:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.974 13:22:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.974 13:22:16 -- scripts/common.sh@364 -- # decimal 1 00:08:11.974 13:22:16 -- scripts/common.sh@352 -- # local d=1 00:08:11.974 13:22:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.974 13:22:16 -- scripts/common.sh@354 -- # echo 1 00:08:11.974 13:22:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.974 13:22:16 -- scripts/common.sh@365 -- # decimal 2 00:08:11.974 13:22:16 -- scripts/common.sh@352 -- # local d=2 00:08:11.974 13:22:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.974 13:22:16 -- scripts/common.sh@354 -- # echo 2 00:08:11.974 13:22:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.974 13:22:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.974 13:22:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.974 13:22:16 -- scripts/common.sh@367 -- # return 0 00:08:11.974 13:22:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.974 13:22:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.974 --rc genhtml_branch_coverage=1 00:08:11.974 --rc genhtml_function_coverage=1 00:08:11.974 --rc genhtml_legend=1 00:08:11.974 --rc geninfo_all_blocks=1 00:08:11.974 --rc geninfo_unexecuted_blocks=1 00:08:11.974 00:08:11.974 ' 00:08:11.974 13:22:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.974 --rc genhtml_branch_coverage=1 00:08:11.974 --rc genhtml_function_coverage=1 00:08:11.974 --rc genhtml_legend=1 00:08:11.974 --rc geninfo_all_blocks=1 00:08:11.974 --rc geninfo_unexecuted_blocks=1 00:08:11.974 00:08:11.974 ' 00:08:11.974 13:22:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.974 --rc genhtml_branch_coverage=1 00:08:11.974 --rc genhtml_function_coverage=1 00:08:11.974 --rc genhtml_legend=1 00:08:11.974 --rc geninfo_all_blocks=1 00:08:11.974 --rc geninfo_unexecuted_blocks=1 00:08:11.974 00:08:11.974 ' 00:08:11.974 13:22:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.974 --rc genhtml_branch_coverage=1 00:08:11.974 --rc genhtml_function_coverage=1 00:08:11.974 --rc genhtml_legend=1 00:08:11.974 --rc geninfo_all_blocks=1 00:08:11.974 --rc geninfo_unexecuted_blocks=1 00:08:11.974 00:08:11.974 ' 00:08:11.974 13:22:16 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:11.974 13:22:16 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:11.974 13:22:16 -- common/autotest_common.sh@34 -- # set -e 00:08:11.974 13:22:16 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:11.974 13:22:16 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:11.974 13:22:16 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:11.974 13:22:16 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:11.974 13:22:16 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:11.974 13:22:16 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:11.974 13:22:16 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:11.974 13:22:16 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:11.974 13:22:16 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:11.974 13:22:16 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:11.974 13:22:16 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:11.974 13:22:16 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:11.974 13:22:16 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:11.974 13:22:16 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:11.974 13:22:16 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:11.974 13:22:16 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:11.974 13:22:16 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:11.974 13:22:16 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:11.974 13:22:16 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:11.974 13:22:16 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:11.974 13:22:16 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:11.974 13:22:16 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:11.974 13:22:16 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:11.974 13:22:16 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:11.974 13:22:16 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:11.974 13:22:16 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:11.974 13:22:16 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:11.974 13:22:16 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:11.974 13:22:16 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:11.974 13:22:16 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:11.974 13:22:16 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:11.974 13:22:16 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:11.974 13:22:16 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:11.974 13:22:16 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:11.974 13:22:16 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:11.974 13:22:16 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:11.974 13:22:16 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:11.974 13:22:16 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:11.974 13:22:16 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:11.974 13:22:16 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:11.974 13:22:16 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:11.974 13:22:16 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:11.974 13:22:16 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:11.974 13:22:16 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:11.974 13:22:16 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:11.974 13:22:16 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:11.974 13:22:16 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:11.974 13:22:16 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:11.974 13:22:16 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:11.974 13:22:16 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:11.974 13:22:16 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:11.974 13:22:16 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:11.974 13:22:16 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:11.974 13:22:16 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:11.974 13:22:16 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:11.974 13:22:16 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:11.974 13:22:16 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:11.974 13:22:16 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:11.974 13:22:16 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:11.974 13:22:16 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:11.974 13:22:16 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:11.974 13:22:16 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:11.974 13:22:16 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:11.974 13:22:16 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:11.974 13:22:16 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.974 13:22:16 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:11.974 13:22:16 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:11.974 13:22:16 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:11.974 13:22:16 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:11.974 13:22:16 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:11.974 13:22:16 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:11.974 13:22:16 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:11.974 13:22:16 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:11.974 13:22:16 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:11.974 13:22:16 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:11.975 13:22:16 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:11.975 13:22:16 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:11.975 13:22:16 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:11.975 13:22:16 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:11.975 13:22:16 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:11.975 13:22:16 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:11.975 13:22:16 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:11.975 13:22:16 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:11.975 13:22:16 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:11.975 13:22:16 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:11.975 13:22:16 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:11.975 13:22:16 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:11.975 13:22:16 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:11.975 13:22:16 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.975 13:22:16 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:11.975 13:22:16 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.975 13:22:16 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:11.975 13:22:16 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:11.975 13:22:16 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:11.975 13:22:16 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:11.975 13:22:16 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:11.975 13:22:16 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:11.975 13:22:16 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:11.975 13:22:16 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:11.975 #define SPDK_CONFIG_H 00:08:11.975 #define SPDK_CONFIG_APPS 1 00:08:11.975 #define SPDK_CONFIG_ARCH native 00:08:11.975 #undef SPDK_CONFIG_ASAN 00:08:11.975 #define SPDK_CONFIG_AVAHI 1 00:08:11.975 #undef SPDK_CONFIG_CET 00:08:11.975 #define SPDK_CONFIG_COVERAGE 1 00:08:11.975 #define SPDK_CONFIG_CROSS_PREFIX 00:08:11.975 #undef SPDK_CONFIG_CRYPTO 00:08:11.975 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:11.975 #undef SPDK_CONFIG_CUSTOMOCF 00:08:11.975 #undef SPDK_CONFIG_DAOS 00:08:11.975 #define SPDK_CONFIG_DAOS_DIR 00:08:11.975 #define SPDK_CONFIG_DEBUG 1 00:08:11.975 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:11.975 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:11.975 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:11.975 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.975 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:11.975 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:11.975 #define SPDK_CONFIG_EXAMPLES 1 00:08:11.975 #undef SPDK_CONFIG_FC 00:08:11.975 #define SPDK_CONFIG_FC_PATH 00:08:11.975 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:11.975 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:11.975 #undef SPDK_CONFIG_FUSE 00:08:11.975 #undef SPDK_CONFIG_FUZZER 00:08:11.975 #define SPDK_CONFIG_FUZZER_LIB 00:08:11.975 #define SPDK_CONFIG_GOLANG 1 00:08:11.975 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:11.975 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:11.975 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:11.975 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:11.975 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:11.975 #define SPDK_CONFIG_IDXD 1 00:08:11.975 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:11.975 #undef SPDK_CONFIG_IPSEC_MB 00:08:11.975 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:11.975 #define SPDK_CONFIG_ISAL 1 00:08:11.975 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:11.975 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:11.975 #define SPDK_CONFIG_LIBDIR 00:08:11.975 #undef SPDK_CONFIG_LTO 00:08:11.975 #define SPDK_CONFIG_MAX_LCORES 00:08:11.975 #define SPDK_CONFIG_NVME_CUSE 1 00:08:11.975 #undef SPDK_CONFIG_OCF 00:08:11.975 #define SPDK_CONFIG_OCF_PATH 00:08:11.975 #define SPDK_CONFIG_OPENSSL_PATH 00:08:11.975 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:11.975 #undef SPDK_CONFIG_PGO_USE 00:08:11.975 #define SPDK_CONFIG_PREFIX /usr/local 00:08:11.975 #undef SPDK_CONFIG_RAID5F 00:08:11.975 #undef SPDK_CONFIG_RBD 00:08:11.975 #define SPDK_CONFIG_RDMA 1 00:08:11.975 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:11.975 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:11.975 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:11.975 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:11.975 #define SPDK_CONFIG_SHARED 1 00:08:11.975 #undef SPDK_CONFIG_SMA 00:08:11.975 #define SPDK_CONFIG_TESTS 1 00:08:11.975 #undef SPDK_CONFIG_TSAN 00:08:11.975 #define SPDK_CONFIG_UBLK 1 00:08:11.975 #define SPDK_CONFIG_UBSAN 1 00:08:11.975 #undef SPDK_CONFIG_UNIT_TESTS 00:08:11.975 #undef SPDK_CONFIG_URING 00:08:11.975 #define SPDK_CONFIG_URING_PATH 00:08:11.975 #undef SPDK_CONFIG_URING_ZNS 00:08:11.975 #define SPDK_CONFIG_USDT 1 00:08:11.975 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:11.975 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:11.975 #undef SPDK_CONFIG_VFIO_USER 00:08:11.975 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:11.975 #define SPDK_CONFIG_VHOST 1 00:08:11.975 #define SPDK_CONFIG_VIRTIO 1 00:08:11.975 #undef SPDK_CONFIG_VTUNE 00:08:11.975 #define SPDK_CONFIG_VTUNE_DIR 00:08:11.975 #define SPDK_CONFIG_WERROR 1 00:08:11.975 #define SPDK_CONFIG_WPDK_DIR 00:08:11.975 #undef SPDK_CONFIG_XNVME 00:08:11.975 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:11.975 13:22:16 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:11.975 13:22:16 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.975 13:22:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.975 13:22:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.975 13:22:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.975 13:22:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.975 13:22:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.975 13:22:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.975 13:22:16 -- paths/export.sh@5 -- # export PATH 00:08:11.975 13:22:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.975 13:22:16 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:11.975 13:22:16 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:11.975 13:22:16 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:11.975 13:22:16 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:11.975 13:22:16 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:11.975 13:22:16 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:11.975 13:22:16 -- pm/common@16 -- # TEST_TAG=N/A 00:08:11.975 13:22:16 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:11.975 13:22:16 -- common/autotest_common.sh@52 -- # : 1 00:08:11.975 13:22:16 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:11.975 13:22:16 -- common/autotest_common.sh@56 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:11.975 13:22:16 -- common/autotest_common.sh@58 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:11.975 13:22:16 -- common/autotest_common.sh@60 -- # : 1 00:08:11.975 13:22:16 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:11.975 13:22:16 -- common/autotest_common.sh@62 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:11.975 13:22:16 -- common/autotest_common.sh@64 -- # : 00:08:11.975 13:22:16 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:11.975 13:22:16 -- common/autotest_common.sh@66 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:11.975 13:22:16 -- common/autotest_common.sh@68 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:11.975 13:22:16 -- common/autotest_common.sh@70 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:11.975 13:22:16 -- common/autotest_common.sh@72 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:11.975 13:22:16 -- common/autotest_common.sh@74 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:11.975 13:22:16 -- common/autotest_common.sh@76 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:11.975 13:22:16 -- common/autotest_common.sh@78 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:11.975 13:22:16 -- common/autotest_common.sh@80 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:11.975 13:22:16 -- common/autotest_common.sh@82 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:11.975 13:22:16 -- common/autotest_common.sh@84 -- # : 0 00:08:11.975 13:22:16 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:11.975 13:22:16 -- common/autotest_common.sh@86 -- # : 1 00:08:11.975 13:22:16 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:11.976 13:22:16 -- common/autotest_common.sh@88 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:11.976 13:22:16 -- common/autotest_common.sh@90 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:11.976 13:22:16 -- common/autotest_common.sh@92 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:11.976 13:22:16 -- common/autotest_common.sh@94 -- # : 0 00:08:11.976 13:22:16 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:11.976 13:22:16 -- common/autotest_common.sh@96 -- # : tcp 00:08:11.976 13:22:16 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:11.976 13:22:16 -- common/autotest_common.sh@98 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:11.976 13:22:16 -- common/autotest_common.sh@100 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:11.976 13:22:16 -- common/autotest_common.sh@102 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:11.976 13:22:16 -- common/autotest_common.sh@104 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:11.976 13:22:16 -- common/autotest_common.sh@106 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:11.976 13:22:16 -- common/autotest_common.sh@108 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:11.976 13:22:16 -- common/autotest_common.sh@110 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:11.976 13:22:16 -- common/autotest_common.sh@112 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:11.976 13:22:16 -- common/autotest_common.sh@114 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:11.976 13:22:16 -- common/autotest_common.sh@116 -- # : 1 00:08:11.976 13:22:16 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:11.976 13:22:16 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:11.976 13:22:16 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:11.976 13:22:16 -- common/autotest_common.sh@120 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:11.976 13:22:16 -- common/autotest_common.sh@122 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:11.976 13:22:16 -- common/autotest_common.sh@124 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:11.976 13:22:16 -- common/autotest_common.sh@126 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:11.976 13:22:16 -- common/autotest_common.sh@128 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:11.976 13:22:16 -- common/autotest_common.sh@130 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:11.976 13:22:16 -- common/autotest_common.sh@132 -- # : v23.11 00:08:11.976 13:22:16 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:11.976 13:22:16 -- common/autotest_common.sh@134 -- # : true 00:08:11.976 13:22:16 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:11.976 13:22:16 -- common/autotest_common.sh@136 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:11.976 13:22:16 -- common/autotest_common.sh@138 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:11.976 13:22:16 -- common/autotest_common.sh@140 -- # : 1 00:08:11.976 13:22:16 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:11.976 13:22:16 -- 
common/autotest_common.sh@142 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:11.976 13:22:16 -- common/autotest_common.sh@144 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:11.976 13:22:16 -- common/autotest_common.sh@146 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:11.976 13:22:16 -- common/autotest_common.sh@148 -- # : 00:08:11.976 13:22:16 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:11.976 13:22:16 -- common/autotest_common.sh@150 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:11.976 13:22:16 -- common/autotest_common.sh@152 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:11.976 13:22:16 -- common/autotest_common.sh@154 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:11.976 13:22:16 -- common/autotest_common.sh@156 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:11.976 13:22:16 -- common/autotest_common.sh@158 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:11.976 13:22:16 -- common/autotest_common.sh@160 -- # : 0 00:08:11.976 13:22:16 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:11.976 13:22:16 -- common/autotest_common.sh@163 -- # : 00:08:11.976 13:22:16 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:11.976 13:22:16 -- common/autotest_common.sh@165 -- # : 1 00:08:11.976 13:22:16 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:11.976 13:22:16 -- common/autotest_common.sh@167 -- # : 1 00:08:11.976 13:22:16 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:11.976 13:22:16 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.976 13:22:16 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.976 13:22:16 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.976 13:22:16 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:11.976 13:22:16 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:11.976 13:22:16 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:11.976 13:22:16 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:11.976 13:22:16 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.976 13:22:16 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.976 13:22:16 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.976 13:22:16 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.976 13:22:16 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:11.976 13:22:16 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:11.976 13:22:16 -- common/autotest_common.sh@196 -- # cat 00:08:11.976 13:22:16 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:11.976 13:22:16 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.976 13:22:16 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.976 13:22:16 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.976 13:22:16 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.976 13:22:16 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:11.976 13:22:16 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:11.976 13:22:16 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.976 13:22:16 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.976 13:22:16 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.976 13:22:16 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.976 13:22:16 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.976 13:22:16 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.976 13:22:16 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.976 13:22:16 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.976 13:22:16 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:11.976 13:22:16 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:11.976 13:22:16 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.976 13:22:16 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.976 13:22:16 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:11.976 13:22:16 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:11.976 13:22:16 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:11.976 13:22:16 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:11.976 13:22:16 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:11.977 13:22:16 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:11.977 13:22:16 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:11.977 13:22:16 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:11.977 13:22:16 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:11.977 13:22:16 -- common/autotest_common.sh@259 -- # valgrind= 00:08:11.977 13:22:16 -- common/autotest_common.sh@265 -- # uname -s 00:08:11.977 13:22:16 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:11.977 13:22:16 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:11.977 13:22:16 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:11.977 13:22:16 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:11.977 13:22:16 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:11.977 13:22:16 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:11.977 13:22:16 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:11.977 13:22:16 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:11.977 13:22:16 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:11.977 13:22:16 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:11.977 13:22:16 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:11.977 13:22:16 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:11.977 13:22:16 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:11.977 13:22:16 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:11.977 13:22:16 -- common/autotest_common.sh@319 -- # [[ 
-z 72294 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@319 -- # kill -0 72294 00:08:11.977 13:22:16 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:11.977 13:22:16 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:11.977 13:22:16 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:11.977 13:22:16 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:11.977 13:22:16 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:11.977 13:22:16 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:11.977 13:22:16 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:11.977 13:22:16 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.Ll9EPU 00:08:11.977 13:22:16 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:11.977 13:22:16 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Ll9EPU/tests/target /tmp/spdk.Ll9EPU 00:08:11.977 13:22:16 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@328 -- # df -T 00:08:11.977 13:22:16 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293776896 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289739776 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:11.977 13:22:16 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293776896 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289739776 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266281984 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:11.977 13:22:16 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # avails["$mount"]=97215946752 00:08:11.977 13:22:16 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:11.977 13:22:16 -- common/autotest_common.sh@364 -- # uses["$mount"]=2486833152 00:08:11.977 13:22:16 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:11.977 13:22:16 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:11.977 * Looking for test storage... 00:08:11.977 13:22:16 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:11.977 13:22:16 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:11.977 13:22:16 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.977 13:22:16 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:11.977 13:22:16 -- common/autotest_common.sh@373 -- # mount=/home 00:08:11.977 13:22:16 -- common/autotest_common.sh@375 -- # target_space=13293776896 00:08:11.977 13:22:16 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:11.977 13:22:16 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:11.977 13:22:16 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.977 13:22:16 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.977 13:22:16 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.977 13:22:16 -- common/autotest_common.sh@390 -- # return 0 00:08:11.977 13:22:16 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:11.977 13:22:16 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:11.977 13:22:16 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:11.977 13:22:16 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:11.977 13:22:16 -- common/autotest_common.sh@1682 -- # true 00:08:11.977 13:22:16 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:11.977 13:22:16 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@27 -- # exec 00:08:11.977 13:22:16 -- common/autotest_common.sh@29 -- # exec 00:08:11.977 13:22:16 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:11.977 13:22:16 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:11.977 13:22:16 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:11.977 13:22:16 -- common/autotest_common.sh@18 -- # set -x 00:08:11.977 13:22:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.977 13:22:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.977 13:22:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.977 13:22:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.977 13:22:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.977 13:22:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.977 13:22:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.977 13:22:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.977 13:22:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.978 13:22:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.978 13:22:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.978 13:22:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.978 13:22:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.978 13:22:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.978 13:22:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.978 13:22:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.978 13:22:16 -- scripts/common.sh@344 -- # : 1 00:08:11.978 13:22:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.978 13:22:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.978 13:22:16 -- scripts/common.sh@364 -- # decimal 1 00:08:11.978 13:22:16 -- scripts/common.sh@352 -- # local d=1 00:08:11.978 13:22:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.978 13:22:16 -- scripts/common.sh@354 -- # echo 1 00:08:11.978 13:22:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.978 13:22:16 -- scripts/common.sh@365 -- # decimal 2 00:08:11.978 13:22:16 -- scripts/common.sh@352 -- # local d=2 00:08:11.978 13:22:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.978 13:22:16 -- scripts/common.sh@354 -- # echo 2 00:08:11.978 13:22:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.978 13:22:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.978 13:22:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.978 13:22:16 -- scripts/common.sh@367 -- # return 0 00:08:11.978 13:22:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.978 13:22:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.978 --rc genhtml_branch_coverage=1 00:08:11.978 --rc genhtml_function_coverage=1 00:08:11.978 --rc genhtml_legend=1 00:08:11.978 --rc geninfo_all_blocks=1 00:08:11.978 --rc geninfo_unexecuted_blocks=1 00:08:11.978 00:08:11.978 ' 00:08:11.978 13:22:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.978 --rc genhtml_branch_coverage=1 00:08:11.978 --rc genhtml_function_coverage=1 00:08:11.978 --rc genhtml_legend=1 00:08:11.978 --rc geninfo_all_blocks=1 00:08:11.978 --rc geninfo_unexecuted_blocks=1 00:08:11.978 00:08:11.978 ' 00:08:11.978 13:22:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.978 --rc genhtml_branch_coverage=1 00:08:11.978 --rc genhtml_function_coverage=1 00:08:11.978 --rc genhtml_legend=1 00:08:11.978 --rc geninfo_all_blocks=1 00:08:11.978 --rc 
geninfo_unexecuted_blocks=1 00:08:11.978 00:08:11.978 ' 00:08:11.978 13:22:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.978 --rc genhtml_branch_coverage=1 00:08:11.978 --rc genhtml_function_coverage=1 00:08:11.978 --rc genhtml_legend=1 00:08:11.978 --rc geninfo_all_blocks=1 00:08:11.978 --rc geninfo_unexecuted_blocks=1 00:08:11.978 00:08:11.978 ' 00:08:11.978 13:22:16 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.978 13:22:16 -- nvmf/common.sh@7 -- # uname -s 00:08:11.978 13:22:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.978 13:22:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.978 13:22:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.978 13:22:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.978 13:22:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.978 13:22:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.978 13:22:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.978 13:22:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.978 13:22:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.978 13:22:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:11.978 13:22:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:11.978 13:22:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.978 13:22:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.978 13:22:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.978 13:22:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.978 13:22:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.978 13:22:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.978 13:22:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.978 13:22:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 13:22:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 13:22:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 13:22:16 -- paths/export.sh@5 -- # export PATH 00:08:11.978 13:22:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.978 13:22:16 -- nvmf/common.sh@46 -- # : 0 00:08:11.978 13:22:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.978 13:22:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.978 13:22:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.978 13:22:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.978 13:22:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.978 13:22:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.978 13:22:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.978 13:22:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.978 13:22:16 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:11.978 13:22:16 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:11.978 13:22:16 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:11.978 13:22:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:11.978 13:22:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.978 13:22:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.978 13:22:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.978 13:22:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.978 13:22:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.978 13:22:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.978 13:22:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.978 13:22:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:11.978 13:22:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:11.978 13:22:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.978 13:22:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.978 13:22:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.978 13:22:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:11.978 13:22:16 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.978 13:22:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.978 13:22:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.978 13:22:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.978 13:22:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.978 13:22:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.978 13:22:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.978 13:22:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.979 13:22:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:11.979 13:22:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:11.979 Cannot find device "nvmf_tgt_br" 00:08:11.979 13:22:16 -- nvmf/common.sh@154 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.979 Cannot find device "nvmf_tgt_br2" 00:08:11.979 13:22:16 -- nvmf/common.sh@155 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:11.979 13:22:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:11.979 Cannot find device "nvmf_tgt_br" 00:08:11.979 13:22:16 -- nvmf/common.sh@157 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:11.979 Cannot find device "nvmf_tgt_br2" 00:08:11.979 13:22:16 -- nvmf/common.sh@158 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.979 13:22:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.979 13:22:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.979 13:22:16 -- nvmf/common.sh@161 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.979 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.979 13:22:16 -- nvmf/common.sh@162 -- # true 00:08:11.979 13:22:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.979 13:22:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.979 13:22:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.979 13:22:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.979 13:22:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.979 13:22:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.979 13:22:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.979 13:22:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.979 13:22:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.979 13:22:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.979 13:22:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.979 13:22:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:11.979 13:22:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.979 13:22:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.979 13:22:16 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.979 13:22:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.979 13:22:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.979 13:22:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.979 13:22:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.979 13:22:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.979 13:22:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.979 13:22:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.979 13:22:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.979 13:22:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:11.979 00:08:11.979 --- 10.0.0.2 ping statistics --- 00:08:11.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.979 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:11.979 13:22:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:11.979 00:08:11.979 --- 10.0.0.3 ping statistics --- 00:08:11.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.979 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:11.979 13:22:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:11.979 00:08:11.979 --- 10.0.0.1 ping statistics --- 00:08:11.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.979 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:11.979 13:22:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.979 13:22:16 -- nvmf/common.sh@421 -- # return 0 00:08:11.979 13:22:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.979 13:22:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.979 13:22:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.979 13:22:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.979 13:22:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.979 13:22:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.979 13:22:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.979 13:22:16 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:11.979 13:22:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:11.979 13:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.979 13:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 ************************************ 00:08:11.979 START TEST nvmf_filesystem_no_in_capsule 00:08:11.979 ************************************ 00:08:11.979 13:22:16 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:11.979 13:22:16 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:11.979 13:22:16 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.979 13:22:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.979 13:22:16 -- common/autotest_common.sh@722 -- # 
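The peer ends are then bridged together and reachability is proven with single pings in both directions before the NVMe/TCP pieces are loaded. A condensed recap of the commands in the trace above:

# bridge the peer ends and verify L3 reachability (condensed from the trace)
ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic arriving on the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator
modprobe nvme-tcp                                                   # kernel initiator transport for the host side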
xtrace_disable 00:08:11.979 13:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 13:22:16 -- nvmf/common.sh@469 -- # nvmfpid=72461 00:08:11.979 13:22:16 -- nvmf/common.sh@470 -- # waitforlisten 72461 00:08:11.979 13:22:16 -- common/autotest_common.sh@829 -- # '[' -z 72461 ']' 00:08:11.979 13:22:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.979 13:22:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.979 13:22:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.979 13:22:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.979 13:22:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.979 13:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.979 [2024-12-15 13:22:16.742912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.979 [2024-12-15 13:22:16.743046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.979 [2024-12-15 13:22:16.900662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.979 [2024-12-15 13:22:16.980201] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.979 [2024-12-15 13:22:16.980377] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.979 [2024-12-15 13:22:16.980392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.979 [2024-12-15 13:22:16.980403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
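Here nvmfappstart launches the target binary inside the namespace (pid 72461 in this run) and waitforlisten blocks until the app's RPC socket is serving. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock path shown in the trace; the real waitforlisten also issues an RPC to confirm readiness rather than only polling the socket:

# start the target inside the namespace and wait for its RPC socket (sketch)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                             # 72461 in this run
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # waitforlisten: block until the app listens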
00:08:11.979 [2024-12-15 13:22:16.980617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.979 [2024-12-15 13:22:16.980759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.979 [2024-12-15 13:22:16.981365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.979 [2024-12-15 13:22:16.981372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.238 13:22:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.238 13:22:17 -- common/autotest_common.sh@862 -- # return 0 00:08:12.238 13:22:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:12.238 13:22:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.238 13:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.238 13:22:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.238 13:22:17 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.238 13:22:17 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:12.238 13:22:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.238 13:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.239 [2024-12-15 13:22:17.820650] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.239 13:22:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.239 13:22:17 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.239 13:22:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.239 13:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 Malloc1 00:08:12.497 13:22:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.497 13:22:17 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.497 13:22:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.497 13:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 13:22:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.497 13:22:17 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.497 13:22:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.497 13:22:17 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 13:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.497 13:22:18 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.497 13:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.497 13:22:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 [2024-12-15 13:22:18.004982] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.497 13:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.497 13:22:18 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.497 13:22:18 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:12.497 13:22:18 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:12.497 13:22:18 -- common/autotest_common.sh@1369 -- # local bs 00:08:12.497 13:22:18 -- common/autotest_common.sh@1370 -- # local nb 00:08:12.497 13:22:18 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.497 13:22:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.497 13:22:18 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 
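Once the target is up, the rpc_cmd calls in the trace provision it over JSON-RPC. As I read the harness, rpc_cmd wraps scripts/rpc.py pointed at the target's socket; the equivalent sequence for this first pass (no in-capsule data, -c 0) would look roughly like:

# provision the target over JSON-RPC (sketch; rpc_cmd presumably wraps scripts/rpc.py)
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0          # TCP transport, 8 KiB IO unit, no in-capsule data
$RPC bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420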
13:22:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.497 13:22:18 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:12.497 { 00:08:12.497 "aliases": [ 00:08:12.497 "811666cd-9c40-4284-862c-9b1fe1cbaa9f" 00:08:12.497 ], 00:08:12.497 "assigned_rate_limits": { 00:08:12.497 "r_mbytes_per_sec": 0, 00:08:12.497 "rw_ios_per_sec": 0, 00:08:12.497 "rw_mbytes_per_sec": 0, 00:08:12.497 "w_mbytes_per_sec": 0 00:08:12.497 }, 00:08:12.497 "block_size": 512, 00:08:12.497 "claim_type": "exclusive_write", 00:08:12.497 "claimed": true, 00:08:12.497 "driver_specific": {}, 00:08:12.497 "memory_domains": [ 00:08:12.497 { 00:08:12.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.497 "dma_device_type": 2 00:08:12.497 } 00:08:12.497 ], 00:08:12.497 "name": "Malloc1", 00:08:12.497 "num_blocks": 1048576, 00:08:12.497 "product_name": "Malloc disk", 00:08:12.497 "supported_io_types": { 00:08:12.497 "abort": true, 00:08:12.497 "compare": false, 00:08:12.497 "compare_and_write": false, 00:08:12.497 "flush": true, 00:08:12.497 "nvme_admin": false, 00:08:12.497 "nvme_io": false, 00:08:12.497 "read": true, 00:08:12.497 "reset": true, 00:08:12.497 "unmap": true, 00:08:12.497 "write": true, 00:08:12.497 "write_zeroes": true 00:08:12.497 }, 00:08:12.497 "uuid": "811666cd-9c40-4284-862c-9b1fe1cbaa9f", 00:08:12.498 "zoned": false 00:08:12.498 } 00:08:12.498 ]' 00:08:12.498 13:22:18 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:12.498 13:22:18 -- common/autotest_common.sh@1372 -- # bs=512 00:08:12.498 13:22:18 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:12.498 13:22:18 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:12.498 13:22:18 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:12.498 13:22:18 -- common/autotest_common.sh@1377 -- # echo 512 00:08:12.498 13:22:18 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.498 13:22:18 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.756 13:22:18 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.756 13:22:18 -- common/autotest_common.sh@1187 -- # local i=0 00:08:12.756 13:22:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.756 13:22:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:12.756 13:22:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:14.657 13:22:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:14.657 13:22:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:14.657 13:22:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.657 13:22:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:14.657 13:22:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.657 13:22:20 -- common/autotest_common.sh@1197 -- # return 0 00:08:14.657 13:22:20 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.657 13:22:20 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.657 13:22:20 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.657 13:22:20 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.657 13:22:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.658 13:22:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.658 13:22:20 -- 
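The bdev_get_bdevs JSON above is parsed with jq to compute the expected device size, and then the host side attaches with nvme connect and waits for the serial to show up in lsblk. A condensed sketch of that flow (the logged waitforserial counts matching devices with grep -c; the loop below is a simplification):

# size the bdev from bdev_get_bdevs output, then attach from the initiator side (sketch)
bs=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$(scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
malloc_size=$(( bs * nb ))                                                                    # 536870912 bytes (512 MiB)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 \
    --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done             # waitforserial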
setup/common.sh@80 -- # echo 536870912 00:08:14.658 13:22:20 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.658 13:22:20 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.658 13:22:20 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.658 13:22:20 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.916 13:22:20 -- target/filesystem.sh@69 -- # partprobe 00:08:14.916 13:22:20 -- target/filesystem.sh@70 -- # sleep 1 00:08:15.851 13:22:21 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:15.851 13:22:21 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.851 13:22:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:15.851 13:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.851 13:22:21 -- common/autotest_common.sh@10 -- # set +x 00:08:15.851 ************************************ 00:08:15.851 START TEST filesystem_ext4 00:08:15.851 ************************************ 00:08:15.851 13:22:21 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.851 13:22:21 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.851 13:22:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.851 13:22:21 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.851 13:22:21 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:15.851 13:22:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:15.851 13:22:21 -- common/autotest_common.sh@914 -- # local i=0 00:08:15.851 13:22:21 -- common/autotest_common.sh@915 -- # local force 00:08:15.851 13:22:21 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:15.851 13:22:21 -- common/autotest_common.sh@918 -- # force=-F 00:08:15.851 13:22:21 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.851 mke2fs 1.47.0 (5-Feb-2023) 00:08:16.109 Discarding device blocks: 0/522240 done 00:08:16.109 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.109 Filesystem UUID: 6a759ef3-aa93-4d16-85ea-346ce1a350d5 00:08:16.109 Superblock backups stored on blocks: 00:08:16.109 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.109 00:08:16.109 Allocating group tables: 0/64 done 00:08:16.109 Writing inode tables: 0/64 done 00:08:16.109 Creating journal (8192 blocks): done 00:08:16.109 Writing superblocks and filesystem accounting information: 0/64 done 00:08:16.109 00:08:16.109 13:22:21 -- common/autotest_common.sh@931 -- # return 0 00:08:16.109 13:22:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.374 13:22:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.374 13:22:26 -- target/filesystem.sh@25 -- # sync 00:08:21.632 13:22:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.632 13:22:27 -- target/filesystem.sh@27 -- # sync 00:08:21.632 13:22:27 -- target/filesystem.sh@29 -- # i=0 00:08:21.632 13:22:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.632 13:22:27 -- target/filesystem.sh@37 -- # kill -0 72461 00:08:21.632 13:22:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.632 13:22:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.632 13:22:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.632 13:22:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.632 00:08:21.632 real 0m5.619s 00:08:21.632 user 0m0.027s 00:08:21.632 sys 0m0.068s 00:08:21.632 
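The attached namespace is then partitioned and each filesystem test runs the same cycle: make the filesystem, mount it, write and remove a file, unmount, and confirm the target process is still alive. A condensed sketch of the ext4 pass shown above; the same mount/touch/sync/rm/umount cycle is repeated for btrfs and xfs below:

# partition the attached namespace and exercise one filesystem on it (condensed from the trace)
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
mkfs.ext4 -F /dev/nvme0n1p1                 # force, the partition is freshly created
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync               # prove the filesystem accepts writes over NVMe/TCP
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                          # target (pid 72461 here) must still be alive after the I/O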
13:22:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.632 13:22:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.632 ************************************ 00:08:21.632 END TEST filesystem_ext4 00:08:21.632 ************************************ 00:08:21.632 13:22:27 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:21.632 13:22:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:21.632 13:22:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.632 13:22:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.632 ************************************ 00:08:21.632 START TEST filesystem_btrfs 00:08:21.632 ************************************ 00:08:21.632 13:22:27 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:21.632 13:22:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:21.632 13:22:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.632 13:22:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:21.632 13:22:27 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:21.632 13:22:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:21.632 13:22:27 -- common/autotest_common.sh@914 -- # local i=0 00:08:21.632 13:22:27 -- common/autotest_common.sh@915 -- # local force 00:08:21.632 13:22:27 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:21.632 13:22:27 -- common/autotest_common.sh@920 -- # force=-f 00:08:21.632 13:22:27 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:21.891 btrfs-progs v6.8.1 00:08:21.891 See https://btrfs.readthedocs.io for more information. 00:08:21.891 00:08:21.891 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:21.891 NOTE: several default settings have changed in version 5.15, please make sure 00:08:21.891 this does not affect your deployments: 00:08:21.891 - DUP for metadata (-m dup) 00:08:21.891 - enabled no-holes (-O no-holes) 00:08:21.891 - enabled free-space-tree (-R free-space-tree) 00:08:21.891 00:08:21.891 Label: (null) 00:08:21.891 UUID: b1e7958b-e75f-4bfd-8c7e-b914979ed875 00:08:21.891 Node size: 16384 00:08:21.891 Sector size: 4096 (CPU page size: 4096) 00:08:21.891 Filesystem size: 510.00MiB 00:08:21.891 Block group profiles: 00:08:21.891 Data: single 8.00MiB 00:08:21.891 Metadata: DUP 32.00MiB 00:08:21.891 System: DUP 8.00MiB 00:08:21.891 SSD detected: yes 00:08:21.891 Zoned device: no 00:08:21.891 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:21.891 Checksum: crc32c 00:08:21.891 Number of devices: 1 00:08:21.891 Devices: 00:08:21.891 ID SIZE PATH 00:08:21.891 1 510.00MiB /dev/nvme0n1p1 00:08:21.891 00:08:21.891 13:22:27 -- common/autotest_common.sh@931 -- # return 0 00:08:21.891 13:22:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.891 13:22:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.891 13:22:27 -- target/filesystem.sh@25 -- # sync 00:08:21.891 13:22:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.891 13:22:27 -- target/filesystem.sh@27 -- # sync 00:08:21.891 13:22:27 -- target/filesystem.sh@29 -- # i=0 00:08:21.891 13:22:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.891 13:22:27 -- target/filesystem.sh@37 -- # kill -0 72461 00:08:21.891 13:22:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.891 13:22:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.891 13:22:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.891 13:22:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.891 00:08:21.891 real 0m0.251s 00:08:21.891 user 0m0.023s 00:08:21.891 sys 0m0.055s 00:08:21.891 13:22:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.891 13:22:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.891 ************************************ 00:08:21.891 END TEST filesystem_btrfs 00:08:21.891 ************************************ 00:08:21.891 13:22:27 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:21.891 13:22:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:21.891 13:22:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.891 13:22:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.891 ************************************ 00:08:21.891 START TEST filesystem_xfs 00:08:21.891 ************************************ 00:08:21.891 13:22:27 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:21.891 13:22:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:21.891 13:22:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:21.891 13:22:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:21.891 13:22:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:21.891 13:22:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:21.891 13:22:27 -- common/autotest_common.sh@914 -- # local i=0 00:08:21.891 13:22:27 -- common/autotest_common.sh@915 -- # local force 00:08:21.891 13:22:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:21.891 13:22:27 -- common/autotest_common.sh@920 -- # force=-f 00:08:21.891 13:22:27 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:22.149 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:22.149 = sectsz=512 attr=2, projid32bit=1 00:08:22.149 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:22.149 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:22.149 data = bsize=4096 blocks=130560, imaxpct=25 00:08:22.149 = sunit=0 swidth=0 blks 00:08:22.149 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:22.149 log =internal log bsize=4096 blocks=16384, version=2 00:08:22.149 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:22.149 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:22.715 Discarding blocks...Done. 00:08:22.715 13:22:28 -- common/autotest_common.sh@931 -- # return 0 00:08:22.715 13:22:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.279 13:22:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.279 13:22:30 -- target/filesystem.sh@25 -- # sync 00:08:25.279 13:22:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.279 13:22:30 -- target/filesystem.sh@27 -- # sync 00:08:25.279 13:22:30 -- target/filesystem.sh@29 -- # i=0 00:08:25.279 13:22:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.279 13:22:30 -- target/filesystem.sh@37 -- # kill -0 72461 00:08:25.279 13:22:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.279 13:22:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.279 13:22:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.279 13:22:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.279 00:08:25.279 real 0m3.209s 00:08:25.279 user 0m0.020s 00:08:25.279 sys 0m0.057s 00:08:25.279 13:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.279 13:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:25.279 ************************************ 00:08:25.279 END TEST filesystem_xfs 00:08:25.279 ************************************ 00:08:25.279 13:22:30 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:25.279 13:22:30 -- target/filesystem.sh@93 -- # sync 00:08:25.279 13:22:30 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.279 13:22:30 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.279 13:22:30 -- common/autotest_common.sh@1208 -- # local i=0 00:08:25.279 13:22:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:25.279 13:22:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.279 13:22:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:25.279 13:22:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.279 13:22:30 -- common/autotest_common.sh@1220 -- # return 0 00:08:25.279 13:22:30 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.279 13:22:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.279 13:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:25.279 13:22:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.279 13:22:30 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:25.279 13:22:30 -- target/filesystem.sh@101 -- # killprocess 72461 00:08:25.279 13:22:30 -- common/autotest_common.sh@936 -- # '[' -z 72461 ']' 00:08:25.279 13:22:30 -- common/autotest_common.sh@940 -- # kill -0 72461 00:08:25.279 13:22:30 -- common/autotest_common.sh@941 -- # uname 00:08:25.279 13:22:30 -- 
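After the xfs pass, the first test tears everything down: the partition is removed under flock, the host disconnects, the subsystem is deleted over RPC, and the target process is killed. A condensed sketch of that teardown, matching the commands in the trace:

# tear down the first pass (sketch)
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
until ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done  # waitforserial_disconnect
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                                                   # killprocess 72461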
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:25.279 13:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72461 00:08:25.280 13:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:25.280 13:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:25.280 killing process with pid 72461 00:08:25.280 13:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72461' 00:08:25.280 13:22:30 -- common/autotest_common.sh@955 -- # kill 72461 00:08:25.280 13:22:30 -- common/autotest_common.sh@960 -- # wait 72461 00:08:25.846 13:22:31 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:25.846 00:08:25.846 real 0m14.627s 00:08:25.846 user 0m55.873s 00:08:25.846 sys 0m2.181s 00:08:25.846 13:22:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.846 13:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 ************************************ 00:08:25.846 END TEST nvmf_filesystem_no_in_capsule 00:08:25.846 ************************************ 00:08:25.846 13:22:31 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:25.846 13:22:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.846 13:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.846 13:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 ************************************ 00:08:25.846 START TEST nvmf_filesystem_in_capsule 00:08:25.846 ************************************ 00:08:25.846 13:22:31 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:25.846 13:22:31 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:25.846 13:22:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:25.846 13:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:25.846 13:22:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.846 13:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 13:22:31 -- nvmf/common.sh@469 -- # nvmfpid=72839 00:08:25.846 13:22:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.846 13:22:31 -- nvmf/common.sh@470 -- # waitforlisten 72839 00:08:25.846 13:22:31 -- common/autotest_common.sh@829 -- # '[' -z 72839 ']' 00:08:25.846 13:22:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.846 13:22:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.846 13:22:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.846 13:22:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.846 13:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 [2024-12-15 13:22:31.386927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
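The whole sequence now repeats as nvmf_filesystem_in_capsule with in_capsule=4096: a fresh nvmf_tgt (pid 72839) is started and provisioned identically, the only functional difference being the -c argument to nvmf_create_transport, which (as I understand the option) lets up to 4 KiB of write data travel inside the command capsule instead of being fetched in a separate transfer:

# second pass: same provisioning, but allow 4 KiB of in-capsule data on the TCP transport (sketch)
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 4096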
00:08:25.846 [2024-12-15 13:22:31.387010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.846 [2024-12-15 13:22:31.522962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.105 [2024-12-15 13:22:31.589677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.105 [2024-12-15 13:22:31.589896] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.105 [2024-12-15 13:22:31.589909] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.105 [2024-12-15 13:22:31.589932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.105 [2024-12-15 13:22:31.590071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.105 [2024-12-15 13:22:31.590437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.105 [2024-12-15 13:22:31.590885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.105 [2024-12-15 13:22:31.590938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.040 13:22:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.040 13:22:32 -- common/autotest_common.sh@862 -- # return 0 00:08:27.040 13:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:27.040 13:22:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 13:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.040 13:22:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:27.040 13:22:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:27.040 13:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 [2024-12-15 13:22:32.478765] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.040 13:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.040 13:22:32 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 [2024-12-15 13:22:32.678085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:27.040 13:22:32 -- common/autotest_common.sh@1369 -- # local bs 00:08:27.040 13:22:32 -- common/autotest_common.sh@1370 -- # local nb 00:08:27.040 13:22:32 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:27.040 13:22:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.040 13:22:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.040 13:22:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.040 13:22:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:27.040 { 00:08:27.040 "aliases": [ 00:08:27.040 "56e29e71-9137-4b91-8773-22b3ef591052" 00:08:27.040 ], 00:08:27.040 "assigned_rate_limits": { 00:08:27.040 "r_mbytes_per_sec": 0, 00:08:27.040 "rw_ios_per_sec": 0, 00:08:27.040 "rw_mbytes_per_sec": 0, 00:08:27.040 "w_mbytes_per_sec": 0 00:08:27.040 }, 00:08:27.040 "block_size": 512, 00:08:27.040 "claim_type": "exclusive_write", 00:08:27.040 "claimed": true, 00:08:27.040 "driver_specific": {}, 00:08:27.040 "memory_domains": [ 00:08:27.040 { 00:08:27.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.040 "dma_device_type": 2 00:08:27.040 } 00:08:27.040 ], 00:08:27.040 "name": "Malloc1", 00:08:27.040 "num_blocks": 1048576, 00:08:27.040 "product_name": "Malloc disk", 00:08:27.040 "supported_io_types": { 00:08:27.040 "abort": true, 00:08:27.040 "compare": false, 00:08:27.040 "compare_and_write": false, 00:08:27.040 "flush": true, 00:08:27.040 "nvme_admin": false, 00:08:27.040 "nvme_io": false, 00:08:27.040 "read": true, 00:08:27.040 "reset": true, 00:08:27.040 "unmap": true, 00:08:27.040 "write": true, 00:08:27.040 "write_zeroes": true 00:08:27.040 }, 00:08:27.040 "uuid": "56e29e71-9137-4b91-8773-22b3ef591052", 00:08:27.040 "zoned": false 00:08:27.040 } 00:08:27.040 ]' 00:08:27.040 13:22:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:27.299 13:22:32 -- common/autotest_common.sh@1372 -- # bs=512 00:08:27.299 13:22:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:27.299 13:22:32 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:27.299 13:22:32 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:27.299 13:22:32 -- common/autotest_common.sh@1377 -- # echo 512 00:08:27.299 13:22:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:27.299 13:22:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:27.299 13:22:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.299 13:22:32 -- common/autotest_common.sh@1187 -- # local i=0 00:08:27.299 13:22:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.299 13:22:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:27.299 13:22:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:29.829 13:22:34 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:29.829 13:22:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:29.829 13:22:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.829 13:22:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:29.829 13:22:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.829 13:22:34 -- common/autotest_common.sh@1197 -- # return 0 00:08:29.829 13:22:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:29.829 13:22:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:29.829 13:22:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:29.829 13:22:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:29.829 13:22:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:29.829 13:22:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:29.829 13:22:35 -- setup/common.sh@80 -- # echo 536870912 00:08:29.829 13:22:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:29.829 13:22:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:29.829 13:22:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:29.829 13:22:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:29.829 13:22:35 -- target/filesystem.sh@69 -- # partprobe 00:08:29.829 13:22:35 -- target/filesystem.sh@70 -- # sleep 1 00:08:30.763 13:22:36 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:30.763 13:22:36 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:30.763 13:22:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.763 13:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.763 13:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:30.763 ************************************ 00:08:30.763 START TEST filesystem_in_capsule_ext4 00:08:30.763 ************************************ 00:08:30.763 13:22:36 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:30.763 13:22:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:30.763 13:22:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.763 13:22:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:30.763 13:22:36 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:30.763 13:22:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:30.763 13:22:36 -- common/autotest_common.sh@914 -- # local i=0 00:08:30.763 13:22:36 -- common/autotest_common.sh@915 -- # local force 00:08:30.763 13:22:36 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:30.763 13:22:36 -- common/autotest_common.sh@918 -- # force=-F 00:08:30.763 13:22:36 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:30.763 mke2fs 1.47.0 (5-Feb-2023) 00:08:30.763 Discarding device blocks: 0/522240 done 00:08:30.763 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:30.763 Filesystem UUID: 091c0761-039e-47a0-bf37-a35d7d2a70bd 00:08:30.763 Superblock backups stored on blocks: 00:08:30.763 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:30.763 00:08:30.763 Allocating group tables: 0/64 done 00:08:30.763 Writing inode tables: 0/64 done 00:08:30.763 Creating journal (8192 blocks): done 00:08:30.763 Writing superblocks and filesystem accounting information: 0/64 done 00:08:30.763 00:08:30.763 13:22:36 
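Before partitioning, sec_size_to_bytes checks that the capacity the initiator sees equals the malloc bdev size (both 536870912 bytes here). The helper lives in test/setup/common.sh; the body below is an assumption based on the trace ([[ -e /sys/block/... ]] followed by an echo of the byte count), not the verbatim function:

# sanity check: namespace size as seen by the host must equal the malloc bdev size (plausible reconstruction)
sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev ]] || return 1
    echo $(( $(cat /sys/block/$dev/size) * 512 ))   # /sys reports size in 512-byte sectors
}
nvme_size=$(sec_size_to_bytes nvme0n1)              # 536870912
(( nvme_size == malloc_size ))                      # both 512 MiB, so the test proceeds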
-- common/autotest_common.sh@931 -- # return 0 00:08:30.763 13:22:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.028 13:22:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.028 13:22:41 -- target/filesystem.sh@25 -- # sync 00:08:36.286 13:22:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.286 13:22:41 -- target/filesystem.sh@27 -- # sync 00:08:36.286 13:22:41 -- target/filesystem.sh@29 -- # i=0 00:08:36.286 13:22:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.286 13:22:41 -- target/filesystem.sh@37 -- # kill -0 72839 00:08:36.286 13:22:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.286 13:22:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.286 13:22:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.286 13:22:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.286 00:08:36.286 real 0m5.636s 00:08:36.286 user 0m0.024s 00:08:36.286 sys 0m0.064s 00:08:36.286 13:22:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.286 13:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:36.286 ************************************ 00:08:36.286 END TEST filesystem_in_capsule_ext4 00:08:36.286 ************************************ 00:08:36.286 13:22:41 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:36.286 13:22:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:36.286 13:22:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.286 13:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:36.286 ************************************ 00:08:36.286 START TEST filesystem_in_capsule_btrfs 00:08:36.286 ************************************ 00:08:36.286 13:22:41 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:36.286 13:22:41 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:36.286 13:22:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.286 13:22:41 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:36.286 13:22:41 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:36.286 13:22:41 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:36.286 13:22:41 -- common/autotest_common.sh@914 -- # local i=0 00:08:36.286 13:22:41 -- common/autotest_common.sh@915 -- # local force 00:08:36.286 13:22:41 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:36.286 13:22:41 -- common/autotest_common.sh@920 -- # force=-f 00:08:36.286 13:22:41 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:36.544 btrfs-progs v6.8.1 00:08:36.544 See https://btrfs.readthedocs.io for more information. 00:08:36.544 00:08:36.544 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:36.544 NOTE: several default settings have changed in version 5.15, please make sure 00:08:36.544 this does not affect your deployments: 00:08:36.544 - DUP for metadata (-m dup) 00:08:36.544 - enabled no-holes (-O no-holes) 00:08:36.544 - enabled free-space-tree (-R free-space-tree) 00:08:36.544 00:08:36.544 Label: (null) 00:08:36.544 UUID: 710e7480-8c28-42ed-88ae-c03e937ca627 00:08:36.544 Node size: 16384 00:08:36.544 Sector size: 4096 (CPU page size: 4096) 00:08:36.544 Filesystem size: 510.00MiB 00:08:36.544 Block group profiles: 00:08:36.544 Data: single 8.00MiB 00:08:36.544 Metadata: DUP 32.00MiB 00:08:36.544 System: DUP 8.00MiB 00:08:36.544 SSD detected: yes 00:08:36.544 Zoned device: no 00:08:36.544 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:36.544 Checksum: crc32c 00:08:36.544 Number of devices: 1 00:08:36.544 Devices: 00:08:36.544 ID SIZE PATH 00:08:36.544 1 510.00MiB /dev/nvme0n1p1 00:08:36.544 00:08:36.544 13:22:42 -- common/autotest_common.sh@931 -- # return 0 00:08:36.544 13:22:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.544 13:22:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.544 13:22:42 -- target/filesystem.sh@25 -- # sync 00:08:36.544 13:22:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.544 13:22:42 -- target/filesystem.sh@27 -- # sync 00:08:36.544 13:22:42 -- target/filesystem.sh@29 -- # i=0 00:08:36.544 13:22:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.544 13:22:42 -- target/filesystem.sh@37 -- # kill -0 72839 00:08:36.544 13:22:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.544 13:22:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.544 13:22:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.544 13:22:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.544 00:08:36.544 real 0m0.276s 00:08:36.544 user 0m0.019s 00:08:36.544 sys 0m0.066s 00:08:36.544 13:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.544 13:22:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.544 ************************************ 00:08:36.544 END TEST filesystem_in_capsule_btrfs 00:08:36.544 ************************************ 00:08:36.544 13:22:42 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:36.544 13:22:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:36.544 13:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.544 13:22:42 -- common/autotest_common.sh@10 -- # set +x 00:08:36.544 ************************************ 00:08:36.544 START TEST filesystem_in_capsule_xfs 00:08:36.544 ************************************ 00:08:36.544 13:22:42 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:36.544 13:22:42 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:36.544 13:22:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.544 13:22:42 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:36.544 13:22:42 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:36.544 13:22:42 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:36.544 13:22:42 -- common/autotest_common.sh@914 -- # local i=0 00:08:36.544 13:22:42 -- common/autotest_common.sh@915 -- # local force 00:08:36.544 13:22:42 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:36.544 13:22:42 -- common/autotest_common.sh@920 -- # force=-f 00:08:36.544 13:22:42 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:36.802 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:36.802 = sectsz=512 attr=2, projid32bit=1 00:08:36.802 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:36.802 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:36.802 data = bsize=4096 blocks=130560, imaxpct=25 00:08:36.802 = sunit=0 swidth=0 blks 00:08:36.802 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:36.802 log =internal log bsize=4096 blocks=16384, version=2 00:08:36.802 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:36.802 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:37.367 Discarding blocks...Done. 00:08:37.367 13:22:42 -- common/autotest_common.sh@931 -- # return 0 00:08:37.367 13:22:42 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.267 13:22:44 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.267 13:22:44 -- target/filesystem.sh@25 -- # sync 00:08:39.267 13:22:44 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.267 13:22:44 -- target/filesystem.sh@27 -- # sync 00:08:39.267 13:22:44 -- target/filesystem.sh@29 -- # i=0 00:08:39.267 13:22:44 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.267 13:22:44 -- target/filesystem.sh@37 -- # kill -0 72839 00:08:39.267 13:22:44 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.267 13:22:44 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.267 13:22:44 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.267 13:22:44 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.267 00:08:39.267 real 0m2.625s 00:08:39.267 user 0m0.023s 00:08:39.267 sys 0m0.056s 00:08:39.267 13:22:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.267 13:22:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.267 ************************************ 00:08:39.267 END TEST filesystem_in_capsule_xfs 00:08:39.267 ************************************ 00:08:39.267 13:22:44 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:39.267 13:22:44 -- target/filesystem.sh@93 -- # sync 00:08:39.267 13:22:44 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.267 13:22:44 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.267 13:22:44 -- common/autotest_common.sh@1208 -- # local i=0 00:08:39.267 13:22:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:39.267 13:22:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.267 13:22:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:39.267 13:22:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.267 13:22:44 -- common/autotest_common.sh@1220 -- # return 0 00:08:39.267 13:22:44 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.267 13:22:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.267 13:22:44 -- common/autotest_common.sh@10 -- # set +x 00:08:39.267 13:22:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.267 13:22:44 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:39.267 13:22:44 -- target/filesystem.sh@101 -- # killprocess 72839 00:08:39.267 13:22:44 -- common/autotest_common.sh@936 -- # '[' -z 72839 ']' 00:08:39.267 13:22:44 -- common/autotest_common.sh@940 -- # kill -0 72839 00:08:39.267 13:22:44 -- 
common/autotest_common.sh@941 -- # uname 00:08:39.267 13:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.267 13:22:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72839 00:08:39.267 13:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.267 13:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.267 killing process with pid 72839 00:08:39.267 13:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72839' 00:08:39.267 13:22:44 -- common/autotest_common.sh@955 -- # kill 72839 00:08:39.267 13:22:44 -- common/autotest_common.sh@960 -- # wait 72839 00:08:39.833 13:22:45 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:39.833 00:08:39.833 real 0m14.027s 00:08:39.833 user 0m53.736s 00:08:39.833 sys 0m2.177s 00:08:39.833 13:22:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.833 13:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:39.833 ************************************ 00:08:39.833 END TEST nvmf_filesystem_in_capsule 00:08:39.833 ************************************ 00:08:39.833 13:22:45 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:39.833 13:22:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:39.833 13:22:45 -- nvmf/common.sh@116 -- # sync 00:08:39.833 13:22:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:39.833 13:22:45 -- nvmf/common.sh@119 -- # set +e 00:08:39.833 13:22:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:39.833 13:22:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:39.833 rmmod nvme_tcp 00:08:39.833 rmmod nvme_fabrics 00:08:39.833 rmmod nvme_keyring 00:08:39.833 13:22:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:39.833 13:22:45 -- nvmf/common.sh@123 -- # set -e 00:08:39.833 13:22:45 -- nvmf/common.sh@124 -- # return 0 00:08:39.833 13:22:45 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:39.833 13:22:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:39.833 13:22:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:39.833 13:22:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:39.833 13:22:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.833 13:22:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:39.833 13:22:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.833 13:22:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.833 13:22:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.091 13:22:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:40.091 00:08:40.091 real 0m29.641s 00:08:40.091 user 1m50.035s 00:08:40.091 sys 0m4.738s 00:08:40.091 13:22:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.091 13:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.091 ************************************ 00:08:40.091 END TEST nvmf_filesystem 00:08:40.091 ************************************ 00:08:40.091 13:22:45 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:40.091 13:22:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.091 13:22:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.091 ************************************ 00:08:40.091 START TEST nvmf_discovery 00:08:40.091 ************************************ 00:08:40.091 13:22:45 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:40.091 * Looking for test storage... 00:08:40.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.091 13:22:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:40.091 13:22:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:40.091 13:22:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:40.091 13:22:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:40.091 13:22:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:40.091 13:22:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:40.091 13:22:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:40.091 13:22:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:40.091 13:22:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.091 13:22:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:40.091 13:22:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:40.091 13:22:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:40.091 13:22:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:40.091 13:22:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:40.091 13:22:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:40.091 13:22:45 -- scripts/common.sh@344 -- # : 1 00:08:40.091 13:22:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:40.091 13:22:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.091 13:22:45 -- scripts/common.sh@364 -- # decimal 1 00:08:40.091 13:22:45 -- scripts/common.sh@352 -- # local d=1 00:08:40.091 13:22:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.091 13:22:45 -- scripts/common.sh@354 -- # echo 1 00:08:40.091 13:22:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:40.091 13:22:45 -- scripts/common.sh@365 -- # decimal 2 00:08:40.091 13:22:45 -- scripts/common.sh@352 -- # local d=2 00:08:40.091 13:22:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.091 13:22:45 -- scripts/common.sh@354 -- # echo 2 00:08:40.091 13:22:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:40.091 13:22:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:40.091 13:22:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:40.091 13:22:45 -- scripts/common.sh@367 -- # return 0 00:08:40.091 13:22:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:40.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.091 --rc genhtml_branch_coverage=1 00:08:40.091 --rc genhtml_function_coverage=1 00:08:40.091 --rc genhtml_legend=1 00:08:40.091 --rc geninfo_all_blocks=1 00:08:40.091 --rc geninfo_unexecuted_blocks=1 00:08:40.091 00:08:40.091 ' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:40.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.091 --rc genhtml_branch_coverage=1 00:08:40.091 --rc genhtml_function_coverage=1 00:08:40.091 --rc genhtml_legend=1 00:08:40.091 --rc geninfo_all_blocks=1 00:08:40.091 --rc geninfo_unexecuted_blocks=1 00:08:40.091 00:08:40.091 ' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:40.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.091 --rc genhtml_branch_coverage=1 00:08:40.091 --rc genhtml_function_coverage=1 00:08:40.091 --rc genhtml_legend=1 00:08:40.091 
--rc geninfo_all_blocks=1 00:08:40.091 --rc geninfo_unexecuted_blocks=1 00:08:40.091 00:08:40.091 ' 00:08:40.091 13:22:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:40.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.091 --rc genhtml_branch_coverage=1 00:08:40.091 --rc genhtml_function_coverage=1 00:08:40.091 --rc genhtml_legend=1 00:08:40.091 --rc geninfo_all_blocks=1 00:08:40.091 --rc geninfo_unexecuted_blocks=1 00:08:40.091 00:08:40.091 ' 00:08:40.092 13:22:45 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.092 13:22:45 -- nvmf/common.sh@7 -- # uname -s 00:08:40.092 13:22:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.092 13:22:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.092 13:22:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.092 13:22:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.092 13:22:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.092 13:22:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.092 13:22:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.092 13:22:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.092 13:22:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.092 13:22:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.092 13:22:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:40.092 13:22:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:40.092 13:22:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.092 13:22:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.092 13:22:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:40.092 13:22:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.092 13:22:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.092 13:22:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.092 13:22:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.092 13:22:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.092 13:22:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.092 13:22:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.092 13:22:45 -- paths/export.sh@5 -- # export PATH 00:08:40.092 13:22:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.092 13:22:45 -- nvmf/common.sh@46 -- # : 0 00:08:40.092 13:22:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:40.092 13:22:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:40.092 13:22:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:40.092 13:22:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.092 13:22:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.092 13:22:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:40.092 13:22:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:40.092 13:22:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:40.092 13:22:45 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:40.092 13:22:45 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:40.092 13:22:45 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:40.092 13:22:45 -- target/discovery.sh@15 -- # hash nvme 00:08:40.092 13:22:45 -- target/discovery.sh@20 -- # nvmftestinit 00:08:40.092 13:22:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:40.092 13:22:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.092 13:22:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:40.092 13:22:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:40.092 13:22:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:40.092 13:22:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.092 13:22:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.092 13:22:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.092 13:22:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:40.092 13:22:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:40.352 13:22:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:40.352 13:22:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:40.352 13:22:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:40.352 13:22:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:40.352 13:22:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.352 13:22:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.352 13:22:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:40.352 13:22:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:40.352 13:22:45 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:40.352 13:22:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:40.352 13:22:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:40.352 13:22:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.352 13:22:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:40.352 13:22:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:40.352 13:22:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:40.352 13:22:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:40.352 13:22:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:40.352 13:22:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:40.352 Cannot find device "nvmf_tgt_br" 00:08:40.352 13:22:45 -- nvmf/common.sh@154 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.352 Cannot find device "nvmf_tgt_br2" 00:08:40.352 13:22:45 -- nvmf/common.sh@155 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:40.352 13:22:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:40.352 Cannot find device "nvmf_tgt_br" 00:08:40.352 13:22:45 -- nvmf/common.sh@157 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:40.352 Cannot find device "nvmf_tgt_br2" 00:08:40.352 13:22:45 -- nvmf/common.sh@158 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:40.352 13:22:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:40.352 13:22:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:40.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.352 13:22:45 -- nvmf/common.sh@161 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:40.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:40.352 13:22:45 -- nvmf/common.sh@162 -- # true 00:08:40.352 13:22:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:40.352 13:22:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:40.352 13:22:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:40.352 13:22:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:40.352 13:22:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:40.352 13:22:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:40.352 13:22:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:40.352 13:22:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:40.352 13:22:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:40.352 13:22:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:40.352 13:22:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:40.352 13:22:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:40.352 13:22:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:40.352 13:22:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:40.352 13:22:46 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:40.352 13:22:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:40.352 13:22:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:40.352 13:22:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:40.352 13:22:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:40.353 13:22:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:40.611 13:22:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:40.611 13:22:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:40.611 13:22:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:40.611 13:22:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:40.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:08:40.611 00:08:40.611 --- 10.0.0.2 ping statistics --- 00:08:40.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.611 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:40.611 13:22:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:40.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:40.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:40.611 00:08:40.611 --- 10.0.0.3 ping statistics --- 00:08:40.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.611 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:40.611 13:22:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:40.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:40.611 00:08:40.611 --- 10.0.0.1 ping statistics --- 00:08:40.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.611 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:40.611 13:22:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.611 13:22:46 -- nvmf/common.sh@421 -- # return 0 00:08:40.611 13:22:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:40.611 13:22:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.611 13:22:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:40.611 13:22:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:40.611 13:22:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.611 13:22:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:40.611 13:22:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:40.611 13:22:46 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:40.612 13:22:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:40.612 13:22:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.612 13:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.612 13:22:46 -- nvmf/common.sh@469 -- # nvmfpid=73381 00:08:40.612 13:22:46 -- nvmf/common.sh@470 -- # waitforlisten 73381 00:08:40.612 13:22:46 -- common/autotest_common.sh@829 -- # '[' -z 73381 ']' 00:08:40.612 13:22:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.612 13:22:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.612 13:22:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.612 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.612 13:22:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.612 13:22:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.612 13:22:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.612 [2024-12-15 13:22:46.157262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:40.612 [2024-12-15 13:22:46.157359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.612 [2024-12-15 13:22:46.294614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.869 [2024-12-15 13:22:46.356681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.869 [2024-12-15 13:22:46.356847] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.869 [2024-12-15 13:22:46.356859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.869 [2024-12-15 13:22:46.356882] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.869 [2024-12-15 13:22:46.357051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.869 [2024-12-15 13:22:46.357207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.869 [2024-12-15 13:22:46.357811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.869 [2024-12-15 13:22:46.357820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.800 13:22:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.801 13:22:47 -- common/autotest_common.sh@862 -- # return 0 00:08:41.801 13:22:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:41.801 13:22:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.801 13:22:47 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 [2024-12-15 13:22:47.251305] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@26 -- # seq 1 4 00:08:41.801 13:22:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:41.801 13:22:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 Null1 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
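The trace above and immediately below covers discovery.sh provisioning the TCP transport and four null-backed subsystems (Null1-4 exported as cnode1-4). A condensed, hand-written equivalent of that loop is sketched here, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and the stock scripts/rpc.py from the same checkout (the test itself goes through its rpc_cmd wrapper instead):

  #!/usr/bin/env bash
  # Sketch only: rebuilds the subsystem layout discovery.sh creates in the trace.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192            # same options as the traced call
  for i in 1 2 3 4; do
    $rpc bdev_null_create "Null$i" 102400 512             # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the test
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

With the listeners and the 4430 referral in place, the nvme discover call traced further down should report six records: the current discovery subsystem, the four NVMe subsystems, and the referral.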
00:08:41.801 13:22:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 [2024-12-15 13:22:47.312292] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:41.801 13:22:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 Null2 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:41.801 13:22:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 Null3 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:41.801 13:22:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 Null4 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:41.801 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.801 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.801 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.801 13:22:47 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 4420 00:08:42.060 00:08:42.060 Discovery Log Number of Records 6, Generation counter 6 00:08:42.060 =====Discovery Log Entry 0====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: current discovery subsystem 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4420 00:08:42.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: explicit discovery connections, duplicate discovery information 00:08:42.060 sectype: none 00:08:42.060 =====Discovery Log Entry 1====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: nvme subsystem 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4420 00:08:42.060 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: none 00:08:42.060 sectype: none 00:08:42.060 =====Discovery Log Entry 2====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: nvme subsystem 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4420 
00:08:42.060 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: none 00:08:42.060 sectype: none 00:08:42.060 =====Discovery Log Entry 3====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: nvme subsystem 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4420 00:08:42.060 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: none 00:08:42.060 sectype: none 00:08:42.060 =====Discovery Log Entry 4====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: nvme subsystem 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4420 00:08:42.060 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: none 00:08:42.060 sectype: none 00:08:42.060 =====Discovery Log Entry 5====== 00:08:42.060 trtype: tcp 00:08:42.060 adrfam: ipv4 00:08:42.060 subtype: discovery subsystem referral 00:08:42.060 treq: not required 00:08:42.060 portid: 0 00:08:42.060 trsvcid: 4430 00:08:42.060 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:42.060 traddr: 10.0.0.2 00:08:42.060 eflags: none 00:08:42.060 sectype: none 00:08:42.060 Perform nvmf subsystem discovery via RPC 00:08:42.060 13:22:47 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:42.060 13:22:47 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 [2024-12-15 13:22:47.544367] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:42.060 [ 00:08:42.060 { 00:08:42.060 "allow_any_host": true, 00:08:42.060 "hosts": [], 00:08:42.060 "listen_addresses": [ 00:08:42.060 { 00:08:42.060 "adrfam": "IPv4", 00:08:42.060 "traddr": "10.0.0.2", 00:08:42.060 "transport": "TCP", 00:08:42.060 "trsvcid": "4420", 00:08:42.060 "trtype": "TCP" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:42.060 "subtype": "Discovery" 00:08:42.060 }, 00:08:42.060 { 00:08:42.060 "allow_any_host": true, 00:08:42.060 "hosts": [], 00:08:42.060 "listen_addresses": [ 00:08:42.060 { 00:08:42.060 "adrfam": "IPv4", 00:08:42.060 "traddr": "10.0.0.2", 00:08:42.060 "transport": "TCP", 00:08:42.060 "trsvcid": "4420", 00:08:42.060 "trtype": "TCP" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "max_cntlid": 65519, 00:08:42.060 "max_namespaces": 32, 00:08:42.060 "min_cntlid": 1, 00:08:42.060 "model_number": "SPDK bdev Controller", 00:08:42.060 "namespaces": [ 00:08:42.060 { 00:08:42.060 "bdev_name": "Null1", 00:08:42.060 "name": "Null1", 00:08:42.060 "nguid": "7CC73C514C4E4B748A25C0381E74A556", 00:08:42.060 "nsid": 1, 00:08:42.060 "uuid": "7cc73c51-4c4e-4b74-8a25-c0381e74a556" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.060 "serial_number": "SPDK00000000000001", 00:08:42.060 "subtype": "NVMe" 00:08:42.060 }, 00:08:42.060 { 00:08:42.060 "allow_any_host": true, 00:08:42.060 "hosts": [], 00:08:42.060 "listen_addresses": [ 00:08:42.060 { 00:08:42.060 "adrfam": "IPv4", 00:08:42.060 "traddr": "10.0.0.2", 00:08:42.060 "transport": "TCP", 00:08:42.060 "trsvcid": "4420", 00:08:42.060 "trtype": "TCP" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "max_cntlid": 65519, 00:08:42.060 "max_namespaces": 32, 00:08:42.060 "min_cntlid": 1, 
00:08:42.060 "model_number": "SPDK bdev Controller", 00:08:42.060 "namespaces": [ 00:08:42.060 { 00:08:42.060 "bdev_name": "Null2", 00:08:42.060 "name": "Null2", 00:08:42.060 "nguid": "0BBC439E2BC0482D952FCB668428DB59", 00:08:42.060 "nsid": 1, 00:08:42.060 "uuid": "0bbc439e-2bc0-482d-952f-cb668428db59" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:42.060 "serial_number": "SPDK00000000000002", 00:08:42.060 "subtype": "NVMe" 00:08:42.060 }, 00:08:42.060 { 00:08:42.060 "allow_any_host": true, 00:08:42.060 "hosts": [], 00:08:42.060 "listen_addresses": [ 00:08:42.060 { 00:08:42.060 "adrfam": "IPv4", 00:08:42.060 "traddr": "10.0.0.2", 00:08:42.060 "transport": "TCP", 00:08:42.060 "trsvcid": "4420", 00:08:42.060 "trtype": "TCP" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "max_cntlid": 65519, 00:08:42.060 "max_namespaces": 32, 00:08:42.060 "min_cntlid": 1, 00:08:42.060 "model_number": "SPDK bdev Controller", 00:08:42.060 "namespaces": [ 00:08:42.060 { 00:08:42.060 "bdev_name": "Null3", 00:08:42.060 "name": "Null3", 00:08:42.060 "nguid": "70A271EC11EE472995E2125A80396036", 00:08:42.060 "nsid": 1, 00:08:42.060 "uuid": "70a271ec-11ee-4729-95e2-125a80396036" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:42.060 "serial_number": "SPDK00000000000003", 00:08:42.060 "subtype": "NVMe" 00:08:42.060 }, 00:08:42.060 { 00:08:42.060 "allow_any_host": true, 00:08:42.060 "hosts": [], 00:08:42.060 "listen_addresses": [ 00:08:42.060 { 00:08:42.060 "adrfam": "IPv4", 00:08:42.060 "traddr": "10.0.0.2", 00:08:42.060 "transport": "TCP", 00:08:42.060 "trsvcid": "4420", 00:08:42.060 "trtype": "TCP" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "max_cntlid": 65519, 00:08:42.060 "max_namespaces": 32, 00:08:42.060 "min_cntlid": 1, 00:08:42.060 "model_number": "SPDK bdev Controller", 00:08:42.060 "namespaces": [ 00:08:42.060 { 00:08:42.060 "bdev_name": "Null4", 00:08:42.060 "name": "Null4", 00:08:42.060 "nguid": "8FED336EF0604D159E230563E9180B14", 00:08:42.060 "nsid": 1, 00:08:42.060 "uuid": "8fed336e-f060-4d15-9e23-0563e9180b14" 00:08:42.060 } 00:08:42.060 ], 00:08:42.060 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:42.060 "serial_number": "SPDK00000000000004", 00:08:42.060 "subtype": "NVMe" 00:08:42.060 } 00:08:42.060 ] 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@42 -- # seq 1 4 00:08:42.060 13:22:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:42.060 13:22:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:42.060 13:22:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:42.060 13:22:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.060 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.060 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.060 13:22:47 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:42.060 13:22:47 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:42.060 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.061 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.061 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.061 13:22:47 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:42.061 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.061 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.061 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.061 13:22:47 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:42.061 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.061 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.061 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.061 13:22:47 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:42.061 13:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.061 13:22:47 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:42.061 13:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.061 13:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.061 13:22:47 -- target/discovery.sh@49 -- # check_bdevs= 00:08:42.061 13:22:47 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:42.061 13:22:47 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:42.061 13:22:47 -- target/discovery.sh@57 -- # nvmftestfini 00:08:42.061 13:22:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.061 13:22:47 -- nvmf/common.sh@116 -- # sync 00:08:42.061 13:22:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.061 13:22:47 -- nvmf/common.sh@119 -- # set +e 00:08:42.061 13:22:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.061 13:22:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.061 rmmod nvme_tcp 00:08:42.319 rmmod nvme_fabrics 00:08:42.319 rmmod nvme_keyring 00:08:42.319 13:22:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.319 13:22:47 -- nvmf/common.sh@123 -- # set -e 00:08:42.319 13:22:47 -- nvmf/common.sh@124 -- # return 0 00:08:42.319 13:22:47 -- nvmf/common.sh@477 -- # '[' -n 73381 ']' 00:08:42.319 13:22:47 -- nvmf/common.sh@478 -- # killprocess 73381 00:08:42.319 13:22:47 -- common/autotest_common.sh@936 -- # '[' -z 73381 ']' 00:08:42.319 13:22:47 -- 
common/autotest_common.sh@940 -- # kill -0 73381 00:08:42.319 13:22:47 -- common/autotest_common.sh@941 -- # uname 00:08:42.319 13:22:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.319 13:22:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73381 00:08:42.319 13:22:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.319 13:22:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.319 killing process with pid 73381 00:08:42.319 13:22:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73381' 00:08:42.319 13:22:47 -- common/autotest_common.sh@955 -- # kill 73381 00:08:42.319 [2024-12-15 13:22:47.824810] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:42.319 13:22:47 -- common/autotest_common.sh@960 -- # wait 73381 00:08:42.577 13:22:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.577 13:22:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.577 13:22:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.577 13:22:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.577 13:22:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.577 13:22:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.577 13:22:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.577 13:22:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.577 13:22:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:42.577 00:08:42.577 real 0m2.479s 00:08:42.577 user 0m7.042s 00:08:42.577 sys 0m0.666s 00:08:42.577 13:22:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.577 13:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.577 ************************************ 00:08:42.577 END TEST nvmf_discovery 00:08:42.577 ************************************ 00:08:42.577 13:22:48 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:42.577 13:22:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.577 13:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.577 13:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.577 ************************************ 00:08:42.577 START TEST nvmf_referrals 00:08:42.577 ************************************ 00:08:42.577 13:22:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:42.577 * Looking for test storage... 
00:08:42.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.577 13:22:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.577 13:22:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.577 13:22:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:42.835 13:22:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:42.835 13:22:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:42.835 13:22:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:42.835 13:22:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.835 13:22:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.835 13:22:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.835 13:22:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.835 13:22:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.835 13:22:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.835 13:22:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.835 13:22:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.835 13:22:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.835 13:22:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.835 13:22:48 -- scripts/common.sh@344 -- # : 1 00:08:42.835 13:22:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.835 13:22:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.836 13:22:48 -- scripts/common.sh@364 -- # decimal 1 00:08:42.836 13:22:48 -- scripts/common.sh@352 -- # local d=1 00:08:42.836 13:22:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.836 13:22:48 -- scripts/common.sh@354 -- # echo 1 00:08:42.836 13:22:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.836 13:22:48 -- scripts/common.sh@365 -- # decimal 2 00:08:42.836 13:22:48 -- scripts/common.sh@352 -- # local d=2 00:08:42.836 13:22:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.836 13:22:48 -- scripts/common.sh@354 -- # echo 2 00:08:42.836 13:22:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.836 13:22:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.836 13:22:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.836 13:22:48 -- scripts/common.sh@367 -- # return 0 00:08:42.836 13:22:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.836 13:22:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.836 --rc genhtml_branch_coverage=1 00:08:42.836 --rc genhtml_function_coverage=1 00:08:42.836 --rc genhtml_legend=1 00:08:42.836 --rc geninfo_all_blocks=1 00:08:42.836 --rc geninfo_unexecuted_blocks=1 00:08:42.836 00:08:42.836 ' 00:08:42.836 13:22:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.836 --rc genhtml_branch_coverage=1 00:08:42.836 --rc genhtml_function_coverage=1 00:08:42.836 --rc genhtml_legend=1 00:08:42.836 --rc geninfo_all_blocks=1 00:08:42.836 --rc geninfo_unexecuted_blocks=1 00:08:42.836 00:08:42.836 ' 00:08:42.836 13:22:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.836 --rc genhtml_branch_coverage=1 00:08:42.836 --rc genhtml_function_coverage=1 00:08:42.836 --rc genhtml_legend=1 00:08:42.836 --rc geninfo_all_blocks=1 00:08:42.836 --rc geninfo_unexecuted_blocks=1 00:08:42.836 00:08:42.836 ' 00:08:42.836 
13:22:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.836 --rc genhtml_branch_coverage=1 00:08:42.836 --rc genhtml_function_coverage=1 00:08:42.836 --rc genhtml_legend=1 00:08:42.836 --rc geninfo_all_blocks=1 00:08:42.836 --rc geninfo_unexecuted_blocks=1 00:08:42.836 00:08:42.836 ' 00:08:42.836 13:22:48 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.836 13:22:48 -- nvmf/common.sh@7 -- # uname -s 00:08:42.836 13:22:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.836 13:22:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.836 13:22:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.836 13:22:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.836 13:22:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.836 13:22:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.836 13:22:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.836 13:22:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.836 13:22:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.836 13:22:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:42.836 13:22:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:42.836 13:22:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.836 13:22:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.836 13:22:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.836 13:22:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.836 13:22:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.836 13:22:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.836 13:22:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.836 13:22:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:22:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:22:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:22:48 -- paths/export.sh@5 -- # export PATH 00:08:42.836 13:22:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.836 13:22:48 -- nvmf/common.sh@46 -- # : 0 00:08:42.836 13:22:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.836 13:22:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.836 13:22:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.836 13:22:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.836 13:22:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.836 13:22:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:42.836 13:22:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.836 13:22:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.836 13:22:48 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:42.836 13:22:48 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:42.836 13:22:48 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:42.836 13:22:48 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:42.836 13:22:48 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:42.836 13:22:48 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:42.836 13:22:48 -- target/referrals.sh@37 -- # nvmftestinit 00:08:42.836 13:22:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:42.836 13:22:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.836 13:22:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.836 13:22:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.836 13:22:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.836 13:22:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.836 13:22:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.836 13:22:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.836 13:22:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:42.836 13:22:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:42.836 13:22:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.836 13:22:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:42.836 13:22:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:42.836 13:22:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:42.836 13:22:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.836 13:22:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.836 13:22:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.836 13:22:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.836 13:22:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.836 13:22:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.836 13:22:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.836 13:22:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.836 13:22:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:42.836 13:22:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:42.836 Cannot find device "nvmf_tgt_br" 00:08:42.836 13:22:48 -- nvmf/common.sh@154 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.836 Cannot find device "nvmf_tgt_br2" 00:08:42.836 13:22:48 -- nvmf/common.sh@155 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:42.836 13:22:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:42.836 Cannot find device "nvmf_tgt_br" 00:08:42.836 13:22:48 -- nvmf/common.sh@157 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:42.836 Cannot find device "nvmf_tgt_br2" 00:08:42.836 13:22:48 -- nvmf/common.sh@158 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:42.836 13:22:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:42.836 13:22:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:42.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.836 13:22:48 -- nvmf/common.sh@161 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.836 13:22:48 -- nvmf/common.sh@162 -- # true 00:08:42.836 13:22:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.836 13:22:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.836 13:22:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.836 13:22:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.836 13:22:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.836 13:22:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.095 13:22:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.095 13:22:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:43.095 13:22:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:43.095 13:22:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:43.095 13:22:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:43.095 13:22:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:08:43.095 13:22:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:43.095 13:22:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.095 13:22:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.095 13:22:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.095 13:22:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:43.095 13:22:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:43.095 13:22:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.095 13:22:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.095 13:22:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.095 13:22:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.095 13:22:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.095 13:22:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:43.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:08:43.095 00:08:43.095 --- 10.0.0.2 ping statistics --- 00:08:43.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.095 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:43.095 13:22:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:43.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:43.095 00:08:43.095 --- 10.0.0.3 ping statistics --- 00:08:43.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.095 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:43.095 13:22:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:43.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:43.095 00:08:43.095 --- 10.0.0.1 ping statistics --- 00:08:43.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.095 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:43.095 13:22:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.095 13:22:48 -- nvmf/common.sh@421 -- # return 0 00:08:43.095 13:22:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:43.095 13:22:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.095 13:22:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:43.095 13:22:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:43.095 13:22:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.095 13:22:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:43.095 13:22:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.095 13:22:48 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:43.095 13:22:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.095 13:22:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.095 13:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.095 13:22:48 -- nvmf/common.sh@469 -- # nvmfpid=73623 00:08:43.095 13:22:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.095 13:22:48 -- nvmf/common.sh@470 -- # waitforlisten 73623 00:08:43.095 13:22:48 -- common/autotest_common.sh@829 -- # '[' -z 73623 ']' 00:08:43.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.095 13:22:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.095 13:22:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.095 13:22:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.095 13:22:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.095 13:22:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.095 [2024-12-15 13:22:48.725570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.095 [2024-12-15 13:22:48.725670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.353 [2024-12-15 13:22:48.868194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.353 [2024-12-15 13:22:48.934624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.353 [2024-12-15 13:22:48.934817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.353 [2024-12-15 13:22:48.934830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.353 [2024-12-15 13:22:48.934837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
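Before the referrals test reaches the RPC calls below, nvmf_veth_init has rebuilt the same namespace topology used by the discovery test above. A stand-alone summary of that setup, assuming a clean host (no leftover nvmf_* links or nvmf_tgt_ns_spdk namespace) and omitting the second target interface (nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3) for brevity, looks like this:

  # Sketch: initiator veth on the host, target veth inside nvmf_tgt_ns_spdk, joined by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # host -> target namespace, matching the successful ping in the trace

The nvmf_tgt process is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the EAL initialization visible just above; the referral RPCs that follow all talk to that target.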
00:08:43.353 [2024-12-15 13:22:48.934975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.353 [2024-12-15 13:22:48.935357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.353 [2024-12-15 13:22:48.935798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.353 [2024-12-15 13:22:48.935795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.289 13:22:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.289 13:22:49 -- common/autotest_common.sh@862 -- # return 0 00:08:44.289 13:22:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:44.289 13:22:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.289 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.289 13:22:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.289 13:22:49 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.289 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.289 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.289 [2024-12-15 13:22:49.751723] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.289 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.289 13:22:49 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:44.289 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 [2024-12-15 13:22:49.771924] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:44.290 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.290 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:44.290 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.290 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- target/referrals.sh@48 -- # jq length 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:44.290 13:22:49 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:44.290 13:22:49 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.290 13:22:49 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.290 13:22:49 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
00:08:44.290 13:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.290 13:22:49 -- target/referrals.sh@21 -- # sort 00:08:44.290 13:22:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.290 13:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.290 13:22:49 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.290 13:22:49 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:44.290 13:22:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.290 13:22:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.290 13:22:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.290 13:22:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.290 13:22:49 -- target/referrals.sh@26 -- # sort 00:08:44.553 13:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:44.553 13:22:50 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:44.553 13:22:50 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:44.553 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.553 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.553 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.553 13:22:50 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:44.553 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.553 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.553 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.553 13:22:50 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:44.553 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.553 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.553 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.553 13:22:50 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.553 13:22:50 -- target/referrals.sh@56 -- # jq length 00:08:44.553 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.553 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.553 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.553 13:22:50 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:44.553 13:22:50 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:44.553 13:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.553 13:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.553 13:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.553 13:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.553 13:22:50 -- target/referrals.sh@26 -- # sort 00:08:44.811 13:22:50 -- target/referrals.sh@26 -- # echo 00:08:44.811 13:22:50 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:44.811 13:22:50 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:44.811 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.811 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.811 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.811 13:22:50 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:44.811 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.811 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.811 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.811 13:22:50 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:44.811 13:22:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:44.811 13:22:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:44.811 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.811 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.811 13:22:50 -- target/referrals.sh@21 -- # sort 00:08:44.811 13:22:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:44.811 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.811 13:22:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:44.811 13:22:50 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.811 13:22:50 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:44.811 13:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:44.811 13:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:44.811 13:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:44.811 13:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.811 13:22:50 -- target/referrals.sh@26 -- # sort 00:08:44.811 13:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:44.811 13:22:50 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:44.811 13:22:50 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:44.811 13:22:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:44.811 13:22:50 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:44.811 13:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:44.811 13:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:45.069 13:22:50 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:45.069 13:22:50 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:45.069 13:22:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:45.069 13:22:50 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:45.069 13:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:45.069 13:22:50 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.069 13:22:50 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:45.069 13:22:50 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:45.069 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.069 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.069 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.069 13:22:50 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:45.069 13:22:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:45.069 13:22:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:45.069 13:22:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:45.069 13:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.069 13:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.069 13:22:50 -- target/referrals.sh@21 -- # sort 00:08:45.069 13:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.069 13:22:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:45.069 13:22:50 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:45.069 13:22:50 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:45.069 13:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:45.069 13:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:45.327 13:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.327 13:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:45.327 13:22:50 -- target/referrals.sh@26 -- # sort 00:08:45.327 13:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:45.327 13:22:50 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:45.327 13:22:50 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:45.327 13:22:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:45.327 13:22:50 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:45.327 13:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.327 13:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:45.327 13:22:50 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:45.327 13:22:50 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:45.327 13:22:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:45.327 13:22:50 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:45.327 13:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.327 13:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:45.585 13:22:51 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:45.585 13:22:51 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:45.585 13:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.585 13:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:45.585 13:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.585 13:22:51 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:45.585 13:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.585 13:22:51 -- target/referrals.sh@82 -- # jq length 00:08:45.585 13:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:45.586 13:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.586 13:22:51 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:45.586 13:22:51 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:45.586 13:22:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:45.586 13:22:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:45.586 13:22:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:45.586 13:22:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:45.586 13:22:51 -- target/referrals.sh@26 -- # sort 00:08:45.843 13:22:51 -- target/referrals.sh@26 -- # echo 00:08:45.843 13:22:51 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:45.843 13:22:51 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:45.843 13:22:51 -- target/referrals.sh@86 -- # nvmftestfini 00:08:45.843 13:22:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.843 13:22:51 -- nvmf/common.sh@116 -- # sync 00:08:45.843 13:22:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:45.843 13:22:51 -- nvmf/common.sh@119 -- # set +e 00:08:45.843 13:22:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.843 13:22:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:45.843 rmmod nvme_tcp 00:08:45.843 rmmod nvme_fabrics 00:08:45.843 rmmod nvme_keyring 00:08:45.843 13:22:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.843 13:22:51 -- nvmf/common.sh@123 -- # set -e 00:08:45.843 13:22:51 -- nvmf/common.sh@124 -- # return 0 00:08:45.843 13:22:51 -- nvmf/common.sh@477 -- # '[' -n 73623 ']' 00:08:45.843 13:22:51 -- nvmf/common.sh@478 -- # killprocess 73623 00:08:45.843 13:22:51 -- common/autotest_common.sh@936 -- # '[' -z 73623 ']' 00:08:45.843 13:22:51 -- common/autotest_common.sh@940 -- # kill -0 73623 00:08:45.843 13:22:51 -- common/autotest_common.sh@941 -- # uname 00:08:45.843 13:22:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.843 13:22:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73623 00:08:45.843 13:22:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.843 13:22:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.843 killing process with pid 73623 00:08:45.843 13:22:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73623' 00:08:45.843 13:22:51 -- common/autotest_common.sh@955 -- # kill 73623 00:08:45.843 13:22:51 -- common/autotest_common.sh@960 -- # wait 73623 00:08:46.101 13:22:51 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:08:46.101 13:22:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:46.101 13:22:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:46.101 13:22:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.101 13:22:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:46.101 13:22:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.101 13:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.101 13:22:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.101 13:22:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:46.101 00:08:46.101 real 0m3.579s 00:08:46.101 user 0m11.945s 00:08:46.101 sys 0m0.933s 00:08:46.101 13:22:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.101 13:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.101 ************************************ 00:08:46.101 END TEST nvmf_referrals 00:08:46.101 ************************************ 00:08:46.101 13:22:51 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.101 13:22:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:46.101 13:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.101 13:22:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.101 ************************************ 00:08:46.101 START TEST nvmf_connect_disconnect 00:08:46.101 ************************************ 00:08:46.101 13:22:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:46.360 * Looking for test storage... 00:08:46.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:46.360 13:22:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:46.360 13:22:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:46.360 13:22:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:46.360 13:22:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:46.360 13:22:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:46.360 13:22:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:46.360 13:22:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:46.360 13:22:51 -- scripts/common.sh@335 -- # IFS=.-: 00:08:46.360 13:22:51 -- scripts/common.sh@335 -- # read -ra ver1 00:08:46.360 13:22:51 -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.360 13:22:51 -- scripts/common.sh@336 -- # read -ra ver2 00:08:46.360 13:22:51 -- scripts/common.sh@337 -- # local 'op=<' 00:08:46.360 13:22:51 -- scripts/common.sh@339 -- # ver1_l=2 00:08:46.360 13:22:51 -- scripts/common.sh@340 -- # ver2_l=1 00:08:46.360 13:22:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:46.360 13:22:51 -- scripts/common.sh@343 -- # case "$op" in 00:08:46.360 13:22:51 -- scripts/common.sh@344 -- # : 1 00:08:46.360 13:22:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:46.360 13:22:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.360 13:22:51 -- scripts/common.sh@364 -- # decimal 1 00:08:46.360 13:22:51 -- scripts/common.sh@352 -- # local d=1 00:08:46.360 13:22:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.360 13:22:51 -- scripts/common.sh@354 -- # echo 1 00:08:46.360 13:22:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:46.360 13:22:51 -- scripts/common.sh@365 -- # decimal 2 00:08:46.360 13:22:51 -- scripts/common.sh@352 -- # local d=2 00:08:46.360 13:22:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.360 13:22:51 -- scripts/common.sh@354 -- # echo 2 00:08:46.360 13:22:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:46.360 13:22:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:46.360 13:22:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:46.360 13:22:51 -- scripts/common.sh@367 -- # return 0 00:08:46.360 13:22:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.360 13:22:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:46.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.360 --rc genhtml_branch_coverage=1 00:08:46.360 --rc genhtml_function_coverage=1 00:08:46.360 --rc genhtml_legend=1 00:08:46.360 --rc geninfo_all_blocks=1 00:08:46.360 --rc geninfo_unexecuted_blocks=1 00:08:46.360 00:08:46.360 ' 00:08:46.360 13:22:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:46.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.360 --rc genhtml_branch_coverage=1 00:08:46.360 --rc genhtml_function_coverage=1 00:08:46.360 --rc genhtml_legend=1 00:08:46.360 --rc geninfo_all_blocks=1 00:08:46.360 --rc geninfo_unexecuted_blocks=1 00:08:46.360 00:08:46.360 ' 00:08:46.361 13:22:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.361 --rc genhtml_branch_coverage=1 00:08:46.361 --rc genhtml_function_coverage=1 00:08:46.361 --rc genhtml_legend=1 00:08:46.361 --rc geninfo_all_blocks=1 00:08:46.361 --rc geninfo_unexecuted_blocks=1 00:08:46.361 00:08:46.361 ' 00:08:46.361 13:22:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:46.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.361 --rc genhtml_branch_coverage=1 00:08:46.361 --rc genhtml_function_coverage=1 00:08:46.361 --rc genhtml_legend=1 00:08:46.361 --rc geninfo_all_blocks=1 00:08:46.361 --rc geninfo_unexecuted_blocks=1 00:08:46.361 00:08:46.361 ' 00:08:46.361 13:22:51 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:46.361 13:22:51 -- nvmf/common.sh@7 -- # uname -s 00:08:46.361 13:22:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.361 13:22:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.361 13:22:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.361 13:22:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.361 13:22:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.361 13:22:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.361 13:22:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.361 13:22:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.361 13:22:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.361 13:22:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
00:08:46.361 13:22:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:08:46.361 13:22:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.361 13:22:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.361 13:22:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:46.361 13:22:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.361 13:22:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.361 13:22:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.361 13:22:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.361 13:22:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.361 13:22:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.361 13:22:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.361 13:22:51 -- paths/export.sh@5 -- # export PATH 00:08:46.361 13:22:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.361 13:22:51 -- nvmf/common.sh@46 -- # : 0 00:08:46.361 13:22:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:46.361 13:22:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:46.361 13:22:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:46.361 13:22:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.361 13:22:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.361 13:22:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:46.361 13:22:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:46.361 13:22:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:46.361 13:22:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.361 13:22:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.361 13:22:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:46.361 13:22:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:46.361 13:22:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.361 13:22:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:46.361 13:22:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:46.361 13:22:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:46.361 13:22:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.361 13:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.361 13:22:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.361 13:22:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:46.361 13:22:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:46.361 13:22:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.361 13:22:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.361 13:22:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:46.361 13:22:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:46.361 13:22:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:46.361 13:22:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:46.361 13:22:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:46.361 13:22:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.361 13:22:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:46.361 13:22:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:46.361 13:22:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:46.361 13:22:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:46.361 13:22:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:46.361 13:22:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:46.361 Cannot find device "nvmf_tgt_br" 00:08:46.361 13:22:51 -- nvmf/common.sh@154 -- # true 00:08:46.361 13:22:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:46.361 Cannot find device "nvmf_tgt_br2" 00:08:46.361 13:22:51 -- nvmf/common.sh@155 -- # true 00:08:46.361 13:22:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:46.361 13:22:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:46.361 Cannot find device "nvmf_tgt_br" 00:08:46.361 13:22:52 -- nvmf/common.sh@157 -- # true 00:08:46.361 13:22:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:46.361 Cannot find device "nvmf_tgt_br2" 00:08:46.361 13:22:52 -- nvmf/common.sh@158 -- # true 00:08:46.361 13:22:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:46.620 13:22:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:46.620 13:22:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:46.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.620 13:22:52 -- nvmf/common.sh@161 -- # true 00:08:46.620 13:22:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.620 13:22:52 -- nvmf/common.sh@162 -- # true 00:08:46.620 13:22:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:46.620 13:22:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:46.620 13:22:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:46.620 13:22:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:46.620 13:22:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:46.620 13:22:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:46.620 13:22:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:46.620 13:22:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:46.620 13:22:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:46.620 13:22:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:46.620 13:22:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:46.620 13:22:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:46.620 13:22:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:46.620 13:22:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:46.620 13:22:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:46.620 13:22:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:46.620 13:22:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:46.620 13:22:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:46.620 13:22:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:46.620 13:22:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:46.620 13:22:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:46.620 13:22:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:46.620 13:22:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:46.620 13:22:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:46.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:08:46.620 00:08:46.620 --- 10.0.0.2 ping statistics --- 00:08:46.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.620 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:46.620 13:22:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:46.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:46.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:46.620 00:08:46.620 --- 10.0.0.3 ping statistics --- 00:08:46.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.620 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:46.620 13:22:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:46.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:08:46.620 00:08:46.620 --- 10.0.0.1 ping statistics --- 00:08:46.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.620 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:46.620 13:22:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.620 13:22:52 -- nvmf/common.sh@421 -- # return 0 00:08:46.620 13:22:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.620 13:22:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.620 13:22:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.620 13:22:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.620 13:22:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.620 13:22:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.620 13:22:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.620 13:22:52 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:46.620 13:22:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.620 13:22:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.620 13:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.620 13:22:52 -- nvmf/common.sh@469 -- # nvmfpid=73932 00:08:46.620 13:22:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.620 13:22:52 -- nvmf/common.sh@470 -- # waitforlisten 73932 00:08:46.620 13:22:52 -- common/autotest_common.sh@829 -- # '[' -z 73932 ']' 00:08:46.620 13:22:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.620 13:22:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.620 13:22:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.620 13:22:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.620 13:22:52 -- common/autotest_common.sh@10 -- # set +x 00:08:46.878 [2024-12-15 13:22:52.356279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.878 [2024-12-15 13:22:52.356375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.878 [2024-12-15 13:22:52.498201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.878 [2024-12-15 13:22:52.564009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.878 [2024-12-15 13:22:52.564213] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.878 [2024-12-15 13:22:52.564240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.878 [2024-12-15 13:22:52.564248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.878 [2024-12-15 13:22:52.564369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.136 [2024-12-15 13:22:52.564893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.136 [2024-12-15 13:22:52.565173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.136 [2024-12-15 13:22:52.565209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.702 13:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.702 13:22:53 -- common/autotest_common.sh@862 -- # return 0 00:08:47.959 13:22:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:47.959 13:22:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.959 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 13:22:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.959 13:22:53 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:47.959 13:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.959 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 [2024-12-15 13:22:53.442647] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.959 13:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.959 13:22:53 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:47.959 13:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.959 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.959 13:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.959 13:22:53 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:47.959 13:22:53 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.959 13:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.960 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.960 13:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.960 13:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.960 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.960 13:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.960 13:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.960 13:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:47.960 [2024-12-15 13:22:53.508089] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.960 13:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:47.960 13:22:53 -- target/connect_disconnect.sh@34 -- # set +x 00:08:50.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
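Each timestamped line that follows is one pass of the connect/disconnect loop the test just armed (num_iterations=100, NVME_CONNECT='nvme connect -i 8'): the host connects to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and then disconnects, and nvme-cli prints the "disconnected 1 controller(s)" line on each teardown. A rough sketch of that loop, reusing the host NQN/ID generated earlier in this run (the actual connect_disconnect.sh also waits for the namespace's block device to appear between the two steps), is:

  host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35
        --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35)
  for i in $(seq 1 100); do
      nvme connect -i 8 "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits "NQN:... disconnected 1 controller(s)"
  done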
00:08:59.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.641 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:49.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.836 13:26:37 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
00:12:31.836 13:26:37 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:31.836 13:26:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:31.836 13:26:37 -- nvmf/common.sh@116 -- # sync 00:12:31.836 13:26:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:31.836 13:26:37 -- nvmf/common.sh@119 -- # set +e 00:12:31.836 13:26:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:31.836 13:26:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:31.836 rmmod nvme_tcp 00:12:31.836 rmmod nvme_fabrics 00:12:31.836 rmmod nvme_keyring 00:12:31.836 13:26:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:31.836 13:26:37 -- nvmf/common.sh@123 -- # set -e 00:12:31.837 13:26:37 -- nvmf/common.sh@124 -- # return 0 00:12:31.837 13:26:37 -- nvmf/common.sh@477 -- # '[' -n 73932 ']' 00:12:31.837 13:26:37 -- nvmf/common.sh@478 -- # killprocess 73932 00:12:31.837 13:26:37 -- common/autotest_common.sh@936 -- # '[' -z 73932 ']' 00:12:31.837 13:26:37 -- common/autotest_common.sh@940 -- # kill -0 73932 00:12:31.837 13:26:37 -- common/autotest_common.sh@941 -- # uname 00:12:31.837 13:26:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.837 13:26:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73932 00:12:31.837 killing process with pid 73932 00:12:31.837 13:26:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.837 13:26:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.837 13:26:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73932' 00:12:31.837 13:26:37 -- common/autotest_common.sh@955 -- # kill 73932 00:12:31.837 13:26:37 -- common/autotest_common.sh@960 -- # wait 73932 00:12:32.113 13:26:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.113 13:26:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.113 13:26:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.113 13:26:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.113 13:26:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.113 13:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.113 13:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.113 13:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.113 13:26:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:32.113 00:12:32.113 real 3m45.939s 00:12:32.113 user 14m37.763s 00:12:32.113 sys 0m25.382s 00:12:32.113 ************************************ 00:12:32.113 END TEST nvmf_connect_disconnect 00:12:32.113 ************************************ 00:12:32.113 13:26:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.113 13:26:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 13:26:37 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.113 13:26:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.113 13:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.113 13:26:37 -- common/autotest_common.sh@10 -- # set +x 00:12:32.113 ************************************ 00:12:32.113 START TEST nvmf_multitarget 00:12:32.113 ************************************ 00:12:32.113 13:26:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.376 * Looking for test storage... 
00:12:32.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.376 13:26:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:32.376 13:26:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:32.376 13:26:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:32.376 13:26:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:32.376 13:26:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:32.376 13:26:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:32.376 13:26:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:32.376 13:26:37 -- scripts/common.sh@335 -- # IFS=.-: 00:12:32.376 13:26:37 -- scripts/common.sh@335 -- # read -ra ver1 00:12:32.376 13:26:37 -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.376 13:26:37 -- scripts/common.sh@336 -- # read -ra ver2 00:12:32.376 13:26:37 -- scripts/common.sh@337 -- # local 'op=<' 00:12:32.376 13:26:37 -- scripts/common.sh@339 -- # ver1_l=2 00:12:32.376 13:26:37 -- scripts/common.sh@340 -- # ver2_l=1 00:12:32.376 13:26:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:32.376 13:26:37 -- scripts/common.sh@343 -- # case "$op" in 00:12:32.376 13:26:37 -- scripts/common.sh@344 -- # : 1 00:12:32.376 13:26:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:32.376 13:26:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.376 13:26:37 -- scripts/common.sh@364 -- # decimal 1 00:12:32.376 13:26:37 -- scripts/common.sh@352 -- # local d=1 00:12:32.376 13:26:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.376 13:26:37 -- scripts/common.sh@354 -- # echo 1 00:12:32.376 13:26:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:32.376 13:26:37 -- scripts/common.sh@365 -- # decimal 2 00:12:32.376 13:26:37 -- scripts/common.sh@352 -- # local d=2 00:12:32.376 13:26:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.376 13:26:37 -- scripts/common.sh@354 -- # echo 2 00:12:32.376 13:26:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:32.376 13:26:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:32.376 13:26:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:32.376 13:26:37 -- scripts/common.sh@367 -- # return 0 00:12:32.376 13:26:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.376 13:26:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:32.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.376 --rc genhtml_branch_coverage=1 00:12:32.376 --rc genhtml_function_coverage=1 00:12:32.376 --rc genhtml_legend=1 00:12:32.376 --rc geninfo_all_blocks=1 00:12:32.376 --rc geninfo_unexecuted_blocks=1 00:12:32.376 00:12:32.376 ' 00:12:32.376 13:26:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:32.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.376 --rc genhtml_branch_coverage=1 00:12:32.377 --rc genhtml_function_coverage=1 00:12:32.377 --rc genhtml_legend=1 00:12:32.377 --rc geninfo_all_blocks=1 00:12:32.377 --rc geninfo_unexecuted_blocks=1 00:12:32.377 00:12:32.377 ' 00:12:32.377 13:26:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.377 --rc genhtml_branch_coverage=1 00:12:32.377 --rc genhtml_function_coverage=1 00:12:32.377 --rc genhtml_legend=1 00:12:32.377 --rc geninfo_all_blocks=1 00:12:32.377 --rc geninfo_unexecuted_blocks=1 00:12:32.377 00:12:32.377 ' 00:12:32.377 
13:26:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:32.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.377 --rc genhtml_branch_coverage=1 00:12:32.377 --rc genhtml_function_coverage=1 00:12:32.377 --rc genhtml_legend=1 00:12:32.377 --rc geninfo_all_blocks=1 00:12:32.377 --rc geninfo_unexecuted_blocks=1 00:12:32.377 00:12:32.377 ' 00:12:32.377 13:26:37 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.377 13:26:37 -- nvmf/common.sh@7 -- # uname -s 00:12:32.377 13:26:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.377 13:26:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.377 13:26:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.377 13:26:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.377 13:26:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.377 13:26:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.377 13:26:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.377 13:26:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.377 13:26:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.377 13:26:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:32.377 13:26:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:32.377 13:26:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.377 13:26:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.377 13:26:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.377 13:26:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.377 13:26:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.377 13:26:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.377 13:26:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.377 13:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 13:26:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 13:26:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 13:26:37 -- paths/export.sh@5 -- # export PATH 00:12:32.377 13:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 13:26:37 -- nvmf/common.sh@46 -- # : 0 00:12:32.377 13:26:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.377 13:26:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.377 13:26:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.377 13:26:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.377 13:26:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.377 13:26:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:32.377 13:26:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.377 13:26:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.377 13:26:37 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:32.377 13:26:37 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:32.377 13:26:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.377 13:26:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.377 13:26:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.377 13:26:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.377 13:26:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.377 13:26:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.377 13:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.377 13:26:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.377 13:26:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:32.377 13:26:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:32.377 13:26:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.377 13:26:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.377 13:26:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:32.377 13:26:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:32.377 13:26:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.377 13:26:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.377 13:26:37 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.377 13:26:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.377 13:26:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.377 13:26:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.377 13:26:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.377 13:26:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.377 13:26:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:32.377 13:26:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:32.377 Cannot find device "nvmf_tgt_br" 00:12:32.377 13:26:37 -- nvmf/common.sh@154 -- # true 00:12:32.377 13:26:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.377 Cannot find device "nvmf_tgt_br2" 00:12:32.377 13:26:37 -- nvmf/common.sh@155 -- # true 00:12:32.377 13:26:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:32.377 13:26:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:32.377 Cannot find device "nvmf_tgt_br" 00:12:32.377 13:26:37 -- nvmf/common.sh@157 -- # true 00:12:32.377 13:26:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:32.377 Cannot find device "nvmf_tgt_br2" 00:12:32.377 13:26:37 -- nvmf/common.sh@158 -- # true 00:12:32.377 13:26:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:32.377 13:26:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:32.377 13:26:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.377 13:26:38 -- nvmf/common.sh@161 -- # true 00:12:32.377 13:26:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.636 13:26:38 -- nvmf/common.sh@162 -- # true 00:12:32.636 13:26:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.636 13:26:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:32.636 13:26:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.636 13:26:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.636 13:26:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.636 13:26:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.636 13:26:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.636 13:26:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:32.636 13:26:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:32.636 13:26:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:32.636 13:26:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:32.636 13:26:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:32.636 13:26:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:32.636 13:26:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.636 13:26:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.636 13:26:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:32.636 13:26:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:32.636 13:26:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:32.636 13:26:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.636 13:26:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.636 13:26:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.636 13:26:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.636 13:26:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.636 13:26:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:32.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:12:32.636 00:12:32.636 --- 10.0.0.2 ping statistics --- 00:12:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.636 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:12:32.636 13:26:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:32.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:12:32.636 00:12:32.636 --- 10.0.0.3 ping statistics --- 00:12:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.636 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:32.636 13:26:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:32.636 00:12:32.636 --- 10.0.0.1 ping statistics --- 00:12:32.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.636 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:32.636 13:26:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.636 13:26:38 -- nvmf/common.sh@421 -- # return 0 00:12:32.636 13:26:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:32.636 13:26:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.636 13:26:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:32.636 13:26:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:32.636 13:26:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.636 13:26:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:32.636 13:26:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:32.636 13:26:38 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:32.636 13:26:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:32.636 13:26:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.636 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:12:32.636 13:26:38 -- nvmf/common.sh@469 -- # nvmfpid=77735 00:12:32.636 13:26:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.636 13:26:38 -- nvmf/common.sh@470 -- # waitforlisten 77735 00:12:32.636 13:26:38 -- common/autotest_common.sh@829 -- # '[' -z 77735 ']' 00:12:32.636 13:26:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.636 13:26:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
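Editor's note: the nvmf_veth_init sequence traced above reduces to roughly the commands below. This is a condensed, hedged recap of what the log itself shows (interface names, namespace, and addresses are the harness's own), not the nvmf/common.sh implementation; the stale-device cleanup and "true" fallbacks seen in the trace are omitted.

# Condensed sketch of the veth test network built above (taken from the traced commands).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends of the pairs move into the namespace; the initiator end stays on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A single bridge ties the host-side peers together so 10.0.0.1 can reach both target IPs.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings in both directions, exactly as traced above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1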
00:12:32.636 13:26:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.636 13:26:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.636 13:26:38 -- common/autotest_common.sh@10 -- # set +x 00:12:32.895 [2024-12-15 13:26:38.354189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:32.895 [2024-12-15 13:26:38.354279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.895 [2024-12-15 13:26:38.496102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.895 [2024-12-15 13:26:38.552330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:32.895 [2024-12-15 13:26:38.552461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.895 [2024-12-15 13:26:38.552478] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.895 [2024-12-15 13:26:38.552486] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.895 [2024-12-15 13:26:38.552647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.895 [2024-12-15 13:26:38.553134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.895 [2024-12-15 13:26:38.553708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.895 [2024-12-15 13:26:38.553716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.829 13:26:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.829 13:26:39 -- common/autotest_common.sh@862 -- # return 0 00:12:33.829 13:26:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:33.829 13:26:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.829 13:26:39 -- common/autotest_common.sh@10 -- # set +x 00:12:33.829 13:26:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.829 13:26:39 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:33.829 13:26:39 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:33.829 13:26:39 -- target/multitarget.sh@21 -- # jq length 00:12:33.829 13:26:39 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:33.829 13:26:39 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:34.087 "nvmf_tgt_1" 00:12:34.087 13:26:39 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:34.087 "nvmf_tgt_2" 00:12:34.345 13:26:39 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.345 13:26:39 -- target/multitarget.sh@28 -- # jq length 00:12:34.345 13:26:39 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:34.345 13:26:39 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:34.603 true 00:12:34.603 13:26:40 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:34.603 true 00:12:34.603 13:26:40 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.603 13:26:40 -- target/multitarget.sh@35 -- # jq length 00:12:34.861 13:26:40 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:34.861 13:26:40 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:34.861 13:26:40 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:34.861 13:26:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:34.861 13:26:40 -- nvmf/common.sh@116 -- # sync 00:12:34.861 13:26:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:34.861 13:26:40 -- nvmf/common.sh@119 -- # set +e 00:12:34.861 13:26:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:34.861 13:26:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:34.861 rmmod nvme_tcp 00:12:34.861 rmmod nvme_fabrics 00:12:34.861 rmmod nvme_keyring 00:12:34.861 13:26:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:34.861 13:26:40 -- nvmf/common.sh@123 -- # set -e 00:12:34.861 13:26:40 -- nvmf/common.sh@124 -- # return 0 00:12:34.861 13:26:40 -- nvmf/common.sh@477 -- # '[' -n 77735 ']' 00:12:34.861 13:26:40 -- nvmf/common.sh@478 -- # killprocess 77735 00:12:34.861 13:26:40 -- common/autotest_common.sh@936 -- # '[' -z 77735 ']' 00:12:34.861 13:26:40 -- common/autotest_common.sh@940 -- # kill -0 77735 00:12:34.861 13:26:40 -- common/autotest_common.sh@941 -- # uname 00:12:34.861 13:26:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.861 13:26:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77735 00:12:34.861 13:26:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.861 13:26:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.861 killing process with pid 77735 00:12:34.861 13:26:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77735' 00:12:34.861 13:26:40 -- common/autotest_common.sh@955 -- # kill 77735 00:12:34.861 13:26:40 -- common/autotest_common.sh@960 -- # wait 77735 00:12:35.119 13:26:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:35.119 13:26:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:35.119 13:26:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:35.119 13:26:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.119 13:26:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:35.119 13:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.119 13:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.119 13:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.119 13:26:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:35.119 00:12:35.119 real 0m2.952s 00:12:35.119 user 0m9.728s 00:12:35.119 sys 0m0.690s 00:12:35.120 13:26:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:35.120 13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:35.120 ************************************ 00:12:35.120 END TEST nvmf_multitarget 00:12:35.120 ************************************ 00:12:35.120 13:26:40 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.120 13:26:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:35.120 13:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.120 
13:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:35.120 ************************************ 00:12:35.120 START TEST nvmf_rpc 00:12:35.120 ************************************ 00:12:35.120 13:26:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.120 * Looking for test storage... 00:12:35.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.120 13:26:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:35.379 13:26:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:35.379 13:26:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:35.379 13:26:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:35.379 13:26:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:35.379 13:26:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:35.379 13:26:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:35.379 13:26:40 -- scripts/common.sh@335 -- # IFS=.-: 00:12:35.379 13:26:40 -- scripts/common.sh@335 -- # read -ra ver1 00:12:35.379 13:26:40 -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.379 13:26:40 -- scripts/common.sh@336 -- # read -ra ver2 00:12:35.379 13:26:40 -- scripts/common.sh@337 -- # local 'op=<' 00:12:35.379 13:26:40 -- scripts/common.sh@339 -- # ver1_l=2 00:12:35.379 13:26:40 -- scripts/common.sh@340 -- # ver2_l=1 00:12:35.379 13:26:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:35.379 13:26:40 -- scripts/common.sh@343 -- # case "$op" in 00:12:35.379 13:26:40 -- scripts/common.sh@344 -- # : 1 00:12:35.379 13:26:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:35.379 13:26:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.379 13:26:40 -- scripts/common.sh@364 -- # decimal 1 00:12:35.379 13:26:40 -- scripts/common.sh@352 -- # local d=1 00:12:35.379 13:26:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.379 13:26:40 -- scripts/common.sh@354 -- # echo 1 00:12:35.379 13:26:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:35.379 13:26:40 -- scripts/common.sh@365 -- # decimal 2 00:12:35.379 13:26:40 -- scripts/common.sh@352 -- # local d=2 00:12:35.379 13:26:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.379 13:26:40 -- scripts/common.sh@354 -- # echo 2 00:12:35.379 13:26:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:35.379 13:26:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:35.379 13:26:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:35.379 13:26:40 -- scripts/common.sh@367 -- # return 0 00:12:35.379 13:26:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.379 13:26:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.379 --rc genhtml_branch_coverage=1 00:12:35.379 --rc genhtml_function_coverage=1 00:12:35.379 --rc genhtml_legend=1 00:12:35.379 --rc geninfo_all_blocks=1 00:12:35.379 --rc geninfo_unexecuted_blocks=1 00:12:35.379 00:12:35.379 ' 00:12:35.379 13:26:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.379 --rc genhtml_branch_coverage=1 00:12:35.379 --rc genhtml_function_coverage=1 00:12:35.379 --rc genhtml_legend=1 00:12:35.379 --rc geninfo_all_blocks=1 00:12:35.379 --rc geninfo_unexecuted_blocks=1 00:12:35.379 00:12:35.379 ' 00:12:35.379 13:26:40 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.379 --rc genhtml_branch_coverage=1 00:12:35.379 --rc genhtml_function_coverage=1 00:12:35.379 --rc genhtml_legend=1 00:12:35.379 --rc geninfo_all_blocks=1 00:12:35.379 --rc geninfo_unexecuted_blocks=1 00:12:35.379 00:12:35.379 ' 00:12:35.379 13:26:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:35.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.379 --rc genhtml_branch_coverage=1 00:12:35.379 --rc genhtml_function_coverage=1 00:12:35.379 --rc genhtml_legend=1 00:12:35.379 --rc geninfo_all_blocks=1 00:12:35.379 --rc geninfo_unexecuted_blocks=1 00:12:35.379 00:12:35.379 ' 00:12:35.379 13:26:40 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.379 13:26:40 -- nvmf/common.sh@7 -- # uname -s 00:12:35.379 13:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.379 13:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.379 13:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.379 13:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.379 13:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.379 13:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.379 13:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.379 13:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.379 13:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.379 13:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:35.379 13:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:35.379 13:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.379 13:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.379 13:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.379 13:26:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.379 13:26:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.379 13:26:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.379 13:26:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.379 13:26:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.379 13:26:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.379 13:26:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.379 13:26:40 -- paths/export.sh@5 -- # export PATH 00:12:35.379 13:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.379 13:26:40 -- nvmf/common.sh@46 -- # : 0 00:12:35.379 13:26:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:35.379 13:26:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:35.379 13:26:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:35.379 13:26:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.379 13:26:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.379 13:26:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:35.379 13:26:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:35.379 13:26:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:35.379 13:26:40 -- target/rpc.sh@11 -- # loops=5 00:12:35.379 13:26:40 -- target/rpc.sh@23 -- # nvmftestinit 00:12:35.379 13:26:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:35.379 13:26:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.379 13:26:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:35.379 13:26:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:35.379 13:26:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:35.379 13:26:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.379 13:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.379 13:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.379 13:26:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:35.379 13:26:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:35.380 13:26:40 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:35.380 13:26:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.380 13:26:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.380 13:26:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:35.380 13:26:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.380 13:26:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.380 13:26:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.380 13:26:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.380 13:26:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.380 13:26:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.380 13:26:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.380 13:26:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.380 13:26:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:35.380 13:26:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:35.380 Cannot find device "nvmf_tgt_br" 00:12:35.380 13:26:40 -- nvmf/common.sh@154 -- # true 00:12:35.380 13:26:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.380 Cannot find device "nvmf_tgt_br2" 00:12:35.380 13:26:40 -- nvmf/common.sh@155 -- # true 00:12:35.380 13:26:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:35.380 13:26:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:35.380 Cannot find device "nvmf_tgt_br" 00:12:35.380 13:26:40 -- nvmf/common.sh@157 -- # true 00:12:35.380 13:26:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:35.380 Cannot find device "nvmf_tgt_br2" 00:12:35.380 13:26:41 -- nvmf/common.sh@158 -- # true 00:12:35.380 13:26:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:35.380 13:26:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:35.380 13:26:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.380 13:26:41 -- nvmf/common.sh@161 -- # true 00:12:35.380 13:26:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.638 13:26:41 -- nvmf/common.sh@162 -- # true 00:12:35.638 13:26:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.638 13:26:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.638 13:26:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.638 13:26:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.638 13:26:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.638 13:26:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.638 13:26:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.638 13:26:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.638 13:26:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.638 13:26:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:35.638 13:26:41 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:35.638 13:26:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:35.638 13:26:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:35.638 13:26:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.638 13:26:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.638 13:26:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.638 13:26:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:35.638 13:26:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:35.638 13:26:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.638 13:26:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.638 13:26:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.638 13:26:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.638 13:26:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.638 13:26:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:35.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:12:35.638 00:12:35.638 --- 10.0.0.2 ping statistics --- 00:12:35.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.638 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:12:35.638 13:26:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:35.638 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.638 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:35.638 00:12:35.638 --- 10.0.0.3 ping statistics --- 00:12:35.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.638 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:35.638 13:26:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:35.638 00:12:35.638 --- 10.0.0.1 ping statistics --- 00:12:35.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.638 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:35.638 13:26:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.638 13:26:41 -- nvmf/common.sh@421 -- # return 0 00:12:35.638 13:26:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:35.638 13:26:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.638 13:26:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.638 13:26:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.638 13:26:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.638 13:26:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.638 13:26:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.638 13:26:41 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:35.638 13:26:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.638 13:26:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.638 13:26:41 -- common/autotest_common.sh@10 -- # set +x 00:12:35.638 13:26:41 -- nvmf/common.sh@469 -- # nvmfpid=77970 00:12:35.638 13:26:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.638 13:26:41 -- nvmf/common.sh@470 -- # waitforlisten 77970 00:12:35.638 13:26:41 -- common/autotest_common.sh@829 -- # '[' -z 77970 ']' 00:12:35.638 13:26:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.638 13:26:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.638 13:26:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.638 13:26:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.638 13:26:41 -- common/autotest_common.sh@10 -- # set +x 00:12:35.638 [2024-12-15 13:26:41.297462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:35.639 [2024-12-15 13:26:41.297551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.896 [2024-12-15 13:26:41.434266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.897 [2024-12-15 13:26:41.489611] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.897 [2024-12-15 13:26:41.489787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.897 [2024-12-15 13:26:41.489800] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.897 [2024-12-15 13:26:41.489808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
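Editor's note: the rpc.sh run starting here brings up the target the same way the multitarget run did above. As a rough sketch of the flow the following trace exercises, assuming only what is visible in the log (the socket-wait loop below is a simplification, not the real waitforlisten helper from autotest_common.sh):

# Launch the target inside the test namespace, as nvmfappstart does in the trace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified stand-in for waitforlisten: wait for the JSON-RPC socket to appear.
# (The real helper does considerably more checking before returning.)
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# The test then drives the target over JSON-RPC through the rpc_cmd wrapper:
rpc_cmd nvmf_get_stats                          # poll-group stats, validated with jq below
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 # the TCP transport options seen in the trace
rpc_cmd bdev_malloc_create 64 512 -b Malloc1    # backing bdev used as the namespace later on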
00:12:35.897 [2024-12-15 13:26:41.490360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.897 [2024-12-15 13:26:41.490598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.897 [2024-12-15 13:26:41.490492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.897 [2024-12-15 13:26:41.490609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.831 13:26:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.831 13:26:42 -- common/autotest_common.sh@862 -- # return 0 00:12:36.831 13:26:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:36.831 13:26:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.831 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.831 13:26:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.831 13:26:42 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:36.831 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.831 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.831 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.831 13:26:42 -- target/rpc.sh@26 -- # stats='{ 00:12:36.831 "poll_groups": [ 00:12:36.831 { 00:12:36.831 "admin_qpairs": 0, 00:12:36.831 "completed_nvme_io": 0, 00:12:36.831 "current_admin_qpairs": 0, 00:12:36.831 "current_io_qpairs": 0, 00:12:36.831 "io_qpairs": 0, 00:12:36.831 "name": "nvmf_tgt_poll_group_0", 00:12:36.831 "pending_bdev_io": 0, 00:12:36.831 "transports": [] 00:12:36.831 }, 00:12:36.831 { 00:12:36.831 "admin_qpairs": 0, 00:12:36.831 "completed_nvme_io": 0, 00:12:36.831 "current_admin_qpairs": 0, 00:12:36.831 "current_io_qpairs": 0, 00:12:36.831 "io_qpairs": 0, 00:12:36.831 "name": "nvmf_tgt_poll_group_1", 00:12:36.831 "pending_bdev_io": 0, 00:12:36.831 "transports": [] 00:12:36.831 }, 00:12:36.831 { 00:12:36.831 "admin_qpairs": 0, 00:12:36.831 "completed_nvme_io": 0, 00:12:36.831 "current_admin_qpairs": 0, 00:12:36.831 "current_io_qpairs": 0, 00:12:36.831 "io_qpairs": 0, 00:12:36.831 "name": "nvmf_tgt_poll_group_2", 00:12:36.831 "pending_bdev_io": 0, 00:12:36.831 "transports": [] 00:12:36.831 }, 00:12:36.831 { 00:12:36.831 "admin_qpairs": 0, 00:12:36.831 "completed_nvme_io": 0, 00:12:36.831 "current_admin_qpairs": 0, 00:12:36.831 "current_io_qpairs": 0, 00:12:36.831 "io_qpairs": 0, 00:12:36.831 "name": "nvmf_tgt_poll_group_3", 00:12:36.831 "pending_bdev_io": 0, 00:12:36.831 "transports": [] 00:12:36.831 } 00:12:36.831 ], 00:12:36.831 "tick_rate": 2200000000 00:12:36.831 }' 00:12:36.831 13:26:42 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:36.831 13:26:42 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:36.831 13:26:42 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:36.831 13:26:42 -- target/rpc.sh@15 -- # wc -l 00:12:36.831 13:26:42 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:36.831 13:26:42 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:36.831 13:26:42 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:36.831 13:26:42 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.831 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.831 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.831 [2024-12-15 13:26:42.493269] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.831 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.831 13:26:42 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:36.831 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.831 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@33 -- # stats='{ 00:12:37.090 "poll_groups": [ 00:12:37.090 { 00:12:37.090 "admin_qpairs": 0, 00:12:37.090 "completed_nvme_io": 0, 00:12:37.090 "current_admin_qpairs": 0, 00:12:37.090 "current_io_qpairs": 0, 00:12:37.090 "io_qpairs": 0, 00:12:37.090 "name": "nvmf_tgt_poll_group_0", 00:12:37.090 "pending_bdev_io": 0, 00:12:37.090 "transports": [ 00:12:37.090 { 00:12:37.090 "trtype": "TCP" 00:12:37.090 } 00:12:37.090 ] 00:12:37.090 }, 00:12:37.090 { 00:12:37.090 "admin_qpairs": 0, 00:12:37.090 "completed_nvme_io": 0, 00:12:37.090 "current_admin_qpairs": 0, 00:12:37.090 "current_io_qpairs": 0, 00:12:37.090 "io_qpairs": 0, 00:12:37.090 "name": "nvmf_tgt_poll_group_1", 00:12:37.090 "pending_bdev_io": 0, 00:12:37.090 "transports": [ 00:12:37.090 { 00:12:37.090 "trtype": "TCP" 00:12:37.090 } 00:12:37.090 ] 00:12:37.090 }, 00:12:37.090 { 00:12:37.090 "admin_qpairs": 0, 00:12:37.090 "completed_nvme_io": 0, 00:12:37.090 "current_admin_qpairs": 0, 00:12:37.090 "current_io_qpairs": 0, 00:12:37.090 "io_qpairs": 0, 00:12:37.090 "name": "nvmf_tgt_poll_group_2", 00:12:37.090 "pending_bdev_io": 0, 00:12:37.090 "transports": [ 00:12:37.090 { 00:12:37.090 "trtype": "TCP" 00:12:37.090 } 00:12:37.090 ] 00:12:37.090 }, 00:12:37.090 { 00:12:37.090 "admin_qpairs": 0, 00:12:37.090 "completed_nvme_io": 0, 00:12:37.090 "current_admin_qpairs": 0, 00:12:37.090 "current_io_qpairs": 0, 00:12:37.090 "io_qpairs": 0, 00:12:37.090 "name": "nvmf_tgt_poll_group_3", 00:12:37.090 "pending_bdev_io": 0, 00:12:37.090 "transports": [ 00:12:37.090 { 00:12:37.090 "trtype": "TCP" 00:12:37.090 } 00:12:37.090 ] 00:12:37.090 } 00:12:37.090 ], 00:12:37.090 "tick_rate": 2200000000 00:12:37.090 }' 00:12:37.090 13:26:42 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.090 13:26:42 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:37.090 13:26:42 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.090 13:26:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.090 13:26:42 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:37.090 13:26:42 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:37.090 13:26:42 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:37.090 13:26:42 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:37.090 13:26:42 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 Malloc1 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 
13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 [2024-12-15 13:26:42.695393] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 -a 10.0.0.2 -s 4420 00:12:37.090 13:26:42 -- common/autotest_common.sh@650 -- # local es=0 00:12:37.090 13:26:42 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 -a 10.0.0.2 -s 4420 00:12:37.090 13:26:42 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:37.090 13:26:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.090 13:26:42 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:37.090 13:26:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.090 13:26:42 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:37.090 13:26:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.090 13:26:42 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:37.090 13:26:42 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.090 13:26:42 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 -a 10.0.0.2 -s 4420 00:12:37.090 [2024-12-15 13:26:42.723843] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35' 00:12:37.090 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.090 could not add new controller: failed to write to nvme-fabrics device 00:12:37.090 13:26:42 -- common/autotest_common.sh@653 -- # es=1 00:12:37.090 13:26:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.090 13:26:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.090 13:26:42 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:37.090 13:26:42 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:37.090 13:26:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.090 13:26:42 -- common/autotest_common.sh@10 -- # set +x 00:12:37.090 13:26:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.090 13:26:42 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.348 13:26:42 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.348 13:26:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.348 13:26:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.348 13:26:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.348 13:26:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.249 13:26:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.249 13:26:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:39.249 13:26:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.249 13:26:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.249 13:26:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.249 13:26:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.249 13:26:44 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.508 13:26:44 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.508 13:26:44 -- common/autotest_common.sh@1208 -- # local i=0 00:12:39.508 13:26:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:39.508 13:26:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.508 13:26:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:39.508 13:26:44 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.508 13:26:44 -- common/autotest_common.sh@1220 -- # return 0 00:12:39.508 13:26:44 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:39.508 13:26:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.508 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:12:39.508 13:26:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.508 13:26:45 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.508 13:26:45 -- common/autotest_common.sh@650 -- # local es=0 00:12:39.508 13:26:45 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.508 13:26:45 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:39.508 13:26:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.508 13:26:45 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:39.508 13:26:45 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.508 13:26:45 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:39.508 13:26:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.508 13:26:45 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:39.508 13:26:45 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.508 13:26:45 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.508 [2024-12-15 13:26:45.034842] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35' 00:12:39.508 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:39.508 could not add new controller: failed to write to nvme-fabrics device 00:12:39.508 13:26:45 -- common/autotest_common.sh@653 -- # es=1 00:12:39.508 13:26:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.508 13:26:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.508 13:26:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.508 13:26:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:39.508 13:26:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.508 13:26:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.508 13:26:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.508 13:26:45 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.766 13:26:45 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.766 13:26:45 -- common/autotest_common.sh@1187 -- # local i=0 00:12:39.766 13:26:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.766 13:26:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:39.766 13:26:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.668 13:26:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.668 13:26:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.668 13:26:47 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.668 13:26:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:41.668 13:26:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.668 13:26:47 -- common/autotest_common.sh@1197 -- # return 0 00:12:41.668 13:26:47 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.668 13:26:47 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.668 13:26:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:41.668 13:26:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:41.668 13:26:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.668 13:26:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:41.668 13:26:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.668 13:26:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:41.668 13:26:47 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.668 13:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 13:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:26:47 -- target/rpc.sh@81 -- # seq 1 5 00:12:41.668 13:26:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.668 13:26:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.668 13:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 13:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:26:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.668 13:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 [2024-12-15 13:26:47.341113] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.668 13:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:26:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.668 13:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.668 13:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.668 13:26:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.668 13:26:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.668 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.927 13:26:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.927 13:26:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.927 13:26:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.927 13:26:47 -- common/autotest_common.sh@1187 -- # local i=0 00:12:41.927 13:26:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.927 13:26:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:41.927 13:26:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:44.531 13:26:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:44.531 13:26:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:44.531 13:26:49 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.531 13:26:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:44.531 13:26:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.531 13:26:49 -- common/autotest_common.sh@1197 -- # return 0 00:12:44.531 13:26:49 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.531 13:26:49 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.531 13:26:49 -- common/autotest_common.sh@1208 -- # local i=0 00:12:44.531 13:26:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:44.531 13:26:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
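Editor's note: the remaining iterations repeat the cycle already traced twice above (target/rpc.sh lines 81-94). Condensed from the traced commands, not from the rpc.sh source, each pass looks roughly like this:

# Per-iteration cycle; the trace shows it running for $(seq 1 5), i.e. loops=5.
for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # $NVME_HOSTNQN / $NVME_HOSTID are the harness variables set in nvmf/common.sh above.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    waitforserial SPDKISFASTANDAWESOME   # a block device with this serial appears in lsblk

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done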
00:12:44.531 13:26:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:44.531 13:26:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.531 13:26:49 -- common/autotest_common.sh@1220 -- # return 0 00:12:44.531 13:26:49 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.531 13:26:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 [2024-12-15 13:26:49.639921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.531 13:26:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.531 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.531 13:26:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.531 13:26:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.531 13:26:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.531 13:26:49 -- common/autotest_common.sh@1187 -- # local i=0 00:12:44.531 13:26:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.531 13:26:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:44.531 13:26:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:46.450 13:26:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:46.450 13:26:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:46.450 13:26:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.450 13:26:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:46.450 13:26:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.450 13:26:51 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:46.450 13:26:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.450 13:26:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.450 13:26:51 -- common/autotest_common.sh@1208 -- # local i=0 00:12:46.450 13:26:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:46.450 13:26:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.450 13:26:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:46.450 13:26:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.450 13:26:51 -- common/autotest_common.sh@1220 -- # return 0 00:12:46.450 13:26:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.450 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.450 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.450 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.451 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.451 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.451 13:26:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.451 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.451 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.451 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.451 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 [2024-12-15 13:26:51.962787] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.451 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.451 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.451 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.451 13:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.451 13:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:46.451 13:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.451 13:26:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.709 13:26:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.709 13:26:52 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.709 13:26:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.709 13:26:52 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:46.709 13:26:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:48.612 13:26:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:48.612 13:26:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:48.612 13:26:54 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.612 13:26:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:48.612 13:26:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.612 13:26:54 -- common/autotest_common.sh@1197 -- # return 0 00:12:48.612 13:26:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.871 13:26:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.871 13:26:54 -- common/autotest_common.sh@1208 -- # local i=0 00:12:48.871 13:26:54 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:48.871 13:26:54 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.871 13:26:54 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:48.871 13:26:54 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.871 13:26:54 -- common/autotest_common.sh@1220 -- # return 0 00:12:48.871 13:26:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 13:26:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 13:26:54 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.871 13:26:54 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 13:26:54 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 [2024-12-15 13:26:54.366466] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 13:26:54 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 13:26:54 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.871 13:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.871 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.871 13:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.871 
13:26:54 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.871 13:26:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.871 13:26:54 -- common/autotest_common.sh@1187 -- # local i=0 00:12:48.871 13:26:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.871 13:26:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:48.871 13:26:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:51.403 13:26:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:51.403 13:26:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:51.403 13:26:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:51.403 13:26:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.403 13:26:56 -- common/autotest_common.sh@1197 -- # return 0 00:12:51.403 13:26:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.403 13:26:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@1208 -- # local i=0 00:12:51.403 13:26:56 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:51.403 13:26:56 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:51.403 13:26:56 -- common/autotest_common.sh@1220 -- # return 0 00:12:51.403 13:26:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.403 13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.403 13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.403 13:26:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.403 13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 [2024-12-15 13:26:56.665411] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.403 
13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.403 13:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.403 13:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:51.403 13:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.403 13:26:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.403 13:26:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.403 13:26:56 -- common/autotest_common.sh@1187 -- # local i=0 00:12:51.403 13:26:56 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.403 13:26:56 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:51.403 13:26:56 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:53.307 13:26:58 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:53.307 13:26:58 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:53.307 13:26:58 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.307 13:26:58 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:53.307 13:26:58 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.307 13:26:58 -- common/autotest_common.sh@1197 -- # return 0 00:12:53.307 13:26:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.307 13:26:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.307 13:26:58 -- common/autotest_common.sh@1208 -- # local i=0 00:12:53.307 13:26:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:53.307 13:26:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.307 13:26:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.307 13:26:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:53.307 13:26:58 -- common/autotest_common.sh@1220 -- # return 0 00:12:53.307 13:26:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.307 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.307 13:26:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.307 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.307 13:26:58 -- target/rpc.sh@99 -- # seq 1 5 00:12:53.307 13:26:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.307 13:26:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.307 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.307 13:26:58 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.307 [2024-12-15 13:26:58.980110] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.307 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.307 13:26:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.307 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.307 13:26:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.307 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.307 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 13:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.566 13:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.566 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.566 13:26:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.566 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.566 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 [2024-12-15 13:26:59.028113] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.566 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.566 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.566 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.566 13:26:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.566 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.566 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.567 13:26:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 [2024-12-15 13:26:59.080189] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.567 13:26:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 [2024-12-15 13:26:59.128234] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 
13:26:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:53.567 13:26:59 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 [2024-12-15 13:26:59.176274] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:53.567 13:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.567 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:53.567 13:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.567 13:26:59 -- target/rpc.sh@110 -- # stats='{ 00:12:53.567 "poll_groups": [ 00:12:53.567 { 00:12:53.567 "admin_qpairs": 2, 00:12:53.567 "completed_nvme_io": 115, 00:12:53.567 "current_admin_qpairs": 0, 00:12:53.567 "current_io_qpairs": 0, 00:12:53.567 "io_qpairs": 16, 00:12:53.567 "name": "nvmf_tgt_poll_group_0", 00:12:53.567 "pending_bdev_io": 0, 00:12:53.567 "transports": [ 00:12:53.567 { 00:12:53.567 "trtype": "TCP" 00:12:53.567 } 00:12:53.567 ] 00:12:53.567 }, 00:12:53.567 { 00:12:53.567 "admin_qpairs": 3, 00:12:53.567 "completed_nvme_io": 67, 00:12:53.567 "current_admin_qpairs": 0, 00:12:53.567 "current_io_qpairs": 0, 00:12:53.567 "io_qpairs": 17, 00:12:53.567 "name": "nvmf_tgt_poll_group_1", 00:12:53.567 "pending_bdev_io": 0, 00:12:53.567 "transports": [ 00:12:53.567 { 00:12:53.567 "trtype": "TCP" 00:12:53.567 } 00:12:53.567 ] 00:12:53.567 }, 00:12:53.567 { 00:12:53.567 "admin_qpairs": 1, 00:12:53.567 "completed_nvme_io": 71, 00:12:53.567 "current_admin_qpairs": 0, 00:12:53.567 "current_io_qpairs": 0, 00:12:53.567 "io_qpairs": 19, 00:12:53.567 "name": "nvmf_tgt_poll_group_2", 00:12:53.567 "pending_bdev_io": 0, 00:12:53.567 "transports": [ 00:12:53.567 { 00:12:53.567 "trtype": "TCP" 00:12:53.567 } 00:12:53.567 ] 00:12:53.567 }, 00:12:53.567 { 00:12:53.567 "admin_qpairs": 1, 00:12:53.567 "completed_nvme_io": 167, 00:12:53.567 "current_admin_qpairs": 0, 00:12:53.567 "current_io_qpairs": 0, 00:12:53.567 "io_qpairs": 18, 00:12:53.567 "name": "nvmf_tgt_poll_group_3", 00:12:53.567 "pending_bdev_io": 0, 00:12:53.567 "transports": [ 00:12:53.567 { 00:12:53.567 "trtype": "TCP" 00:12:53.567 } 00:12:53.567 ] 00:12:53.567 } 00:12:53.567 ], 00:12:53.567 "tick_rate": 2200000000 00:12:53.567 }' 00:12:53.567 13:26:59 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.567 13:26:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.567 13:26:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.567 13:26:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.826 13:26:59 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:53.826 13:26:59 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.826 13:26:59 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.826 13:26:59 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.826 13:26:59 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.826 13:26:59 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:53.826 13:26:59 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:53.826 13:26:59 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:53.826 13:26:59 -- target/rpc.sh@123 -- # nvmftestfini 00:12:53.826 13:26:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:53.826 13:26:59 -- nvmf/common.sh@116 -- # sync 00:12:53.826 13:26:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:53.826 13:26:59 -- nvmf/common.sh@119 -- # set +e 00:12:53.826 13:26:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:53.826 13:26:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:53.826 rmmod nvme_tcp 00:12:53.826 rmmod nvme_fabrics 00:12:53.826 rmmod nvme_keyring 00:12:53.826 13:26:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:53.826 13:26:59 -- nvmf/common.sh@123 -- # set -e 00:12:53.826 13:26:59 -- nvmf/common.sh@124 
-- # return 0 00:12:53.826 13:26:59 -- nvmf/common.sh@477 -- # '[' -n 77970 ']' 00:12:53.826 13:26:59 -- nvmf/common.sh@478 -- # killprocess 77970 00:12:53.826 13:26:59 -- common/autotest_common.sh@936 -- # '[' -z 77970 ']' 00:12:53.826 13:26:59 -- common/autotest_common.sh@940 -- # kill -0 77970 00:12:53.826 13:26:59 -- common/autotest_common.sh@941 -- # uname 00:12:53.826 13:26:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:53.826 13:26:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77970 00:12:53.826 13:26:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:53.826 13:26:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:53.826 13:26:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77970' 00:12:53.826 killing process with pid 77970 00:12:53.826 13:26:59 -- common/autotest_common.sh@955 -- # kill 77970 00:12:53.826 13:26:59 -- common/autotest_common.sh@960 -- # wait 77970 00:12:54.085 13:26:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:54.085 13:26:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:54.085 13:26:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:54.085 13:26:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:54.085 13:26:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:54.085 13:26:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.085 13:26:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.085 13:26:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.085 13:26:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:54.085 00:12:54.085 real 0m18.975s 00:12:54.085 user 1m11.709s 00:12:54.085 sys 0m2.617s 00:12:54.085 13:26:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:54.085 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.085 ************************************ 00:12:54.085 END TEST nvmf_rpc 00:12:54.085 ************************************ 00:12:54.085 13:26:59 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.085 13:26:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:54.085 13:26:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.085 13:26:59 -- common/autotest_common.sh@10 -- # set +x 00:12:54.085 ************************************ 00:12:54.085 START TEST nvmf_invalid 00:12:54.085 ************************************ 00:12:54.085 13:26:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.344 * Looking for test storage... 
00:12:54.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:54.344 13:26:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:54.344 13:26:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:54.344 13:26:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:54.344 13:26:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:54.344 13:26:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:54.344 13:26:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:54.344 13:26:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:54.344 13:26:59 -- scripts/common.sh@335 -- # IFS=.-: 00:12:54.344 13:26:59 -- scripts/common.sh@335 -- # read -ra ver1 00:12:54.344 13:26:59 -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.344 13:26:59 -- scripts/common.sh@336 -- # read -ra ver2 00:12:54.344 13:26:59 -- scripts/common.sh@337 -- # local 'op=<' 00:12:54.344 13:26:59 -- scripts/common.sh@339 -- # ver1_l=2 00:12:54.344 13:26:59 -- scripts/common.sh@340 -- # ver2_l=1 00:12:54.344 13:26:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:54.344 13:26:59 -- scripts/common.sh@343 -- # case "$op" in 00:12:54.344 13:26:59 -- scripts/common.sh@344 -- # : 1 00:12:54.344 13:26:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:54.344 13:26:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.344 13:26:59 -- scripts/common.sh@364 -- # decimal 1 00:12:54.344 13:26:59 -- scripts/common.sh@352 -- # local d=1 00:12:54.344 13:26:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.344 13:26:59 -- scripts/common.sh@354 -- # echo 1 00:12:54.344 13:26:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:54.344 13:26:59 -- scripts/common.sh@365 -- # decimal 2 00:12:54.344 13:26:59 -- scripts/common.sh@352 -- # local d=2 00:12:54.344 13:26:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.344 13:26:59 -- scripts/common.sh@354 -- # echo 2 00:12:54.344 13:26:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:54.345 13:26:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:54.345 13:26:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:54.345 13:26:59 -- scripts/common.sh@367 -- # return 0 00:12:54.345 13:26:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.345 13:26:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:54.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.345 --rc genhtml_branch_coverage=1 00:12:54.345 --rc genhtml_function_coverage=1 00:12:54.345 --rc genhtml_legend=1 00:12:54.345 --rc geninfo_all_blocks=1 00:12:54.345 --rc geninfo_unexecuted_blocks=1 00:12:54.345 00:12:54.345 ' 00:12:54.345 13:26:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:54.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.345 --rc genhtml_branch_coverage=1 00:12:54.345 --rc genhtml_function_coverage=1 00:12:54.345 --rc genhtml_legend=1 00:12:54.345 --rc geninfo_all_blocks=1 00:12:54.345 --rc geninfo_unexecuted_blocks=1 00:12:54.345 00:12:54.345 ' 00:12:54.345 13:26:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:54.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.345 --rc genhtml_branch_coverage=1 00:12:54.345 --rc genhtml_function_coverage=1 00:12:54.345 --rc genhtml_legend=1 00:12:54.345 --rc geninfo_all_blocks=1 00:12:54.345 --rc geninfo_unexecuted_blocks=1 00:12:54.345 00:12:54.345 ' 00:12:54.345 
13:26:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:54.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.345 --rc genhtml_branch_coverage=1 00:12:54.345 --rc genhtml_function_coverage=1 00:12:54.345 --rc genhtml_legend=1 00:12:54.345 --rc geninfo_all_blocks=1 00:12:54.345 --rc geninfo_unexecuted_blocks=1 00:12:54.345 00:12:54.345 ' 00:12:54.345 13:26:59 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:54.345 13:26:59 -- nvmf/common.sh@7 -- # uname -s 00:12:54.345 13:26:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.345 13:26:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.345 13:26:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.345 13:26:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.345 13:26:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.345 13:26:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.345 13:26:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.345 13:26:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.345 13:26:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.345 13:26:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:54.345 13:26:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:12:54.345 13:26:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.345 13:26:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.345 13:26:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:54.345 13:26:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.345 13:26:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.345 13:26:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.345 13:26:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.345 13:26:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.345 13:26:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.345 13:26:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.345 13:26:59 -- paths/export.sh@5 -- # export PATH 00:12:54.345 13:26:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.345 13:26:59 -- nvmf/common.sh@46 -- # : 0 00:12:54.345 13:26:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:54.345 13:26:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:54.345 13:26:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:54.345 13:26:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.345 13:26:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.345 13:26:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:54.345 13:26:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:54.345 13:26:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:54.345 13:26:59 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.345 13:26:59 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:54.345 13:26:59 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.345 13:26:59 -- target/invalid.sh@14 -- # target=foobar 00:12:54.345 13:26:59 -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.345 13:26:59 -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.345 13:26:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:54.345 13:26:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.345 13:26:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:54.345 13:26:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:54.345 13:26:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:54.345 13:26:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.345 13:26:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.345 13:26:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.345 13:26:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:54.345 13:26:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:54.345 13:26:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.345 13:26:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.345 13:26:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:54.345 13:26:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:54.345 13:26:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:54.345 13:26:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:54.345 13:26:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:54.345 13:26:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.345 13:26:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:54.345 13:26:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:54.345 13:26:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:54.345 13:26:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:54.345 13:26:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:54.345 13:27:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:54.345 Cannot find device "nvmf_tgt_br" 00:12:54.345 13:27:00 -- nvmf/common.sh@154 -- # true 00:12:54.345 13:27:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:54.345 Cannot find device "nvmf_tgt_br2" 00:12:54.345 13:27:00 -- nvmf/common.sh@155 -- # true 00:12:54.345 13:27:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:54.604 13:27:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:54.604 Cannot find device "nvmf_tgt_br" 00:12:54.604 13:27:00 -- nvmf/common.sh@157 -- # true 00:12:54.604 13:27:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:54.604 Cannot find device "nvmf_tgt_br2" 00:12:54.604 13:27:00 -- nvmf/common.sh@158 -- # true 00:12:54.604 13:27:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:54.604 13:27:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:54.604 13:27:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:54.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.604 13:27:00 -- nvmf/common.sh@161 -- # true 00:12:54.604 13:27:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:54.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:54.604 13:27:00 -- nvmf/common.sh@162 -- # true 00:12:54.604 13:27:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:54.604 13:27:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:54.604 13:27:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:54.604 13:27:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:54.604 13:27:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:54.604 13:27:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:54.604 13:27:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:54.604 13:27:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:54.604 13:27:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:54.604 13:27:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:54.604 13:27:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:54.604 13:27:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:54.604 13:27:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:54.604 13:27:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:54.863 13:27:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:54.863 13:27:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:54.863 13:27:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:54.863 13:27:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:54.863 13:27:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:54.863 13:27:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:54.863 13:27:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:54.863 13:27:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:54.863 13:27:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:54.863 13:27:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:54.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:54.863 00:12:54.863 --- 10.0.0.2 ping statistics --- 00:12:54.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.863 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:54.863 13:27:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:54.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:54.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:54.863 00:12:54.863 --- 10.0.0.3 ping statistics --- 00:12:54.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.863 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:54.863 13:27:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:54.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:12:54.863 00:12:54.863 --- 10.0.0.1 ping statistics --- 00:12:54.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.863 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:12:54.863 13:27:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.863 13:27:00 -- nvmf/common.sh@421 -- # return 0 00:12:54.863 13:27:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:54.863 13:27:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.863 13:27:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:54.863 13:27:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:54.863 13:27:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.863 13:27:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:54.863 13:27:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:54.863 13:27:00 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:54.863 13:27:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:54.863 13:27:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:54.863 13:27:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:54.863 13:27:00 -- nvmf/common.sh@469 -- # nvmfpid=78494 00:12:54.863 13:27:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.863 13:27:00 -- nvmf/common.sh@470 -- # waitforlisten 78494 00:12:54.863 13:27:00 -- common/autotest_common.sh@829 -- # '[' -z 78494 ']' 00:12:54.863 13:27:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.863 13:27:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.863 13:27:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.863 13:27:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.863 13:27:00 -- common/autotest_common.sh@10 -- # set +x 00:12:54.863 [2024-12-15 13:27:00.444352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:54.863 [2024-12-15 13:27:00.444653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.122 [2024-12-15 13:27:00.583686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.122 [2024-12-15 13:27:00.639594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.122 [2024-12-15 13:27:00.640031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.122 [2024-12-15 13:27:00.640144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.122 [2024-12-15 13:27:00.640273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.122 [2024-12-15 13:27:00.640634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.122 [2024-12-15 13:27:00.640731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.122 [2024-12-15 13:27:00.640800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.122 [2024-12-15 13:27:00.640801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.055 13:27:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.055 13:27:01 -- common/autotest_common.sh@862 -- # return 0 00:12:56.055 13:27:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:56.055 13:27:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.055 13:27:01 -- common/autotest_common.sh@10 -- # set +x 00:12:56.055 13:27:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.055 13:27:01 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.055 13:27:01 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28475 00:12:56.314 [2024-12-15 13:27:01.759434] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:56.314 13:27:01 -- target/invalid.sh@40 -- # out='2024/12/15 13:27:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28475 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:56.314 request: 00:12:56.314 { 00:12:56.314 "method": "nvmf_create_subsystem", 00:12:56.314 "params": { 00:12:56.314 "nqn": "nqn.2016-06.io.spdk:cnode28475", 00:12:56.314 "tgt_name": "foobar" 00:12:56.314 } 00:12:56.314 } 00:12:56.314 Got JSON-RPC error response 00:12:56.314 GoRPCClient: error on JSON-RPC call' 00:12:56.314 13:27:01 -- target/invalid.sh@41 -- # [[ 2024/12/15 13:27:01 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28475 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:56.314 request: 00:12:56.314 { 00:12:56.314 "method": "nvmf_create_subsystem", 00:12:56.314 "params": { 00:12:56.314 "nqn": "nqn.2016-06.io.spdk:cnode28475", 00:12:56.314 "tgt_name": "foobar" 00:12:56.314 } 00:12:56.314 } 00:12:56.314 Got JSON-RPC error response 00:12:56.314 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:56.314 13:27:01 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:56.314 13:27:01 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2846 00:12:56.573 [2024-12-15 13:27:02.067752] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2846: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:56.573 13:27:02 -- target/invalid.sh@45 -- # out='2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2846 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:56.573 request: 00:12:56.573 { 00:12:56.573 "method": "nvmf_create_subsystem", 00:12:56.573 "params": { 00:12:56.573 "nqn": "nqn.2016-06.io.spdk:cnode2846", 00:12:56.573 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:12:56.573 } 00:12:56.573 } 00:12:56.573 Got JSON-RPC error response 00:12:56.573 GoRPCClient: error on JSON-RPC call' 00:12:56.573 13:27:02 -- target/invalid.sh@46 -- # [[ 2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2846 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:56.573 request: 00:12:56.573 { 00:12:56.573 "method": "nvmf_create_subsystem", 00:12:56.573 "params": { 00:12:56.573 "nqn": "nqn.2016-06.io.spdk:cnode2846", 00:12:56.573 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:56.573 } 00:12:56.573 } 00:12:56.573 Got JSON-RPC error response 00:12:56.573 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:56.573 13:27:02 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:56.573 13:27:02 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24381 00:12:56.833 [2024-12-15 13:27:02.364012] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24381: invalid model number 'SPDK_Controller' 00:12:56.833 13:27:02 -- target/invalid.sh@50 -- # out='2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode24381], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:56.833 request: 00:12:56.833 { 00:12:56.833 "method": "nvmf_create_subsystem", 00:12:56.833 "params": { 00:12:56.833 "nqn": "nqn.2016-06.io.spdk:cnode24381", 00:12:56.833 "model_number": "SPDK_Controller\u001f" 00:12:56.833 } 00:12:56.833 } 00:12:56.833 Got JSON-RPC error response 00:12:56.833 GoRPCClient: error on JSON-RPC call' 00:12:56.833 13:27:02 -- target/invalid.sh@51 -- # [[ 2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode24381], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:56.833 request: 00:12:56.833 { 00:12:56.833 "method": "nvmf_create_subsystem", 00:12:56.833 "params": { 00:12:56.833 "nqn": "nqn.2016-06.io.spdk:cnode24381", 00:12:56.833 "model_number": "SPDK_Controller\u001f" 00:12:56.833 } 00:12:56.833 } 00:12:56.833 Got JSON-RPC error response 00:12:56.833 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.833 13:27:02 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:56.833 13:27:02 -- target/invalid.sh@19 -- # local length=21 ll 00:12:56.833 13:27:02 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:56.833 13:27:02 -- target/invalid.sh@21 -- # local chars 00:12:56.833 13:27:02 -- target/invalid.sh@22 -- # local string 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 97 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=a 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 119 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=w 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 46 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=. 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 78 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=N 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 89 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=Y 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 92 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+='\' 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 59 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=';' 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 91 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+='[' 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 102 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=f 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 75 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=K 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 65 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=A 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 65 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=A 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 60 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+='<' 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 122 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=z 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 99 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # string+=c 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.833 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # printf %x 100 00:12:56.833 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+=d 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # printf %x 118 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+=v 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # printf %x 62 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+='>' 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # printf %x 40 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+='(' 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # printf %x 57 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+=9 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # printf %x 76 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:56.834 13:27:02 -- target/invalid.sh@25 -- # string+=L 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.834 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.834 13:27:02 -- target/invalid.sh@28 -- # [[ a == \- ]] 00:12:56.834 13:27:02 -- target/invalid.sh@31 -- # echo 'aw.NY\;[fKAA(9L' 00:12:56.834 13:27:02 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'aw.NY\;[fKAA(9L' nqn.2016-06.io.spdk:cnode7342 00:12:57.093 
[2024-12-15 13:27:02.696352] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7342: invalid serial number 'aw.NY\;[fKAA(9L' 00:12:57.093 13:27:02 -- target/invalid.sh@54 -- # out='2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7342 serial_number:aw.NY\;[fKAA(9L], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN aw.NY\;[fKAA(9L 00:12:57.093 request: 00:12:57.093 { 00:12:57.093 "method": "nvmf_create_subsystem", 00:12:57.093 "params": { 00:12:57.093 "nqn": "nqn.2016-06.io.spdk:cnode7342", 00:12:57.093 "serial_number": "aw.NY\\;[fKAA(9L" 00:12:57.093 } 00:12:57.093 } 00:12:57.093 Got JSON-RPC error response 00:12:57.093 GoRPCClient: error on JSON-RPC call' 00:12:57.093 13:27:02 -- target/invalid.sh@55 -- # [[ 2024/12/15 13:27:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7342 serial_number:aw.NY\;[fKAA(9L], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN aw.NY\;[fKAA(9L 00:12:57.093 request: 00:12:57.093 { 00:12:57.093 "method": "nvmf_create_subsystem", 00:12:57.093 "params": { 00:12:57.093 "nqn": "nqn.2016-06.io.spdk:cnode7342", 00:12:57.093 "serial_number": "aw.NY\\;[fKAA(9L" 00:12:57.093 } 00:12:57.093 } 00:12:57.093 Got JSON-RPC error response 00:12:57.093 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:57.093 13:27:02 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:57.093 13:27:02 -- target/invalid.sh@19 -- # local length=41 ll 00:12:57.093 13:27:02 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:57.093 13:27:02 -- target/invalid.sh@21 -- # local chars 00:12:57.093 13:27:02 -- target/invalid.sh@22 -- # local string 00:12:57.093 13:27:02 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:57.093 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.093 13:27:02 -- target/invalid.sh@25 -- # printf %x 88 00:12:57.093 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:57.093 13:27:02 -- target/invalid.sh@25 -- # string+=X 00:12:57.093 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.093 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.093 13:27:02 -- target/invalid.sh@25 -- # printf %x 100 00:12:57.093 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=d 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 43 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=+ 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 116 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:57.094 
13:27:02 -- target/invalid.sh@25 -- # string+=t 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 38 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+='&' 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 116 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=t 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 90 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=Z 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 71 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=G 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 125 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+='}' 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 32 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=' ' 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 94 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+='^' 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 41 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # string+=')' 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.094 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.094 13:27:02 -- target/invalid.sh@25 -- # printf %x 64 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=@ 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 40 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+='(' 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 75 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=K 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 67 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=C 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 113 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=q 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 76 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=L 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 37 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=% 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 44 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=, 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 73 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=I 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 86 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=V 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 82 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=R 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 125 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+='}' 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 101 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=e 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 42 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x2a' 
00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+='*' 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 72 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=H 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 107 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # string+=k 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.353 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.353 13:27:02 -- target/invalid.sh@25 -- # printf %x 97 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=a 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 123 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+='{' 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 116 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=t 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 113 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=q 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 52 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=4 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 111 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=o 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 64 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=@ 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 50 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=2 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 78 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x4e' 
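The long printf/echo run above (and the few iterations that follow) is gen_random_s assembling throw-away serial and model numbers one character at a time from the printable range 0x20-0x7f. Stripped of the per-character xtrace noise, the generator sketched from this trace is roughly:

# Condensed sketch of gen_random_s as it appears in the trace: <length> random
# picks from the chars table (ASCII 32..127), each converted via printf/echo -e.
gen_random_s() {
    local length=$1 ll
    local chars=($(seq 32 127))
    local string=
    for (( ll = 0; ll < length; ll++ )); do
        string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}
gen_random_s 21    # e.g. the 21-character serial number fed to cnode7342 above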
00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=N 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 99 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=c 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 37 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=% 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 120 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=x 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # printf %x 101 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:57.354 13:27:02 -- target/invalid.sh@25 -- # string+=e 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:57.354 13:27:02 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:57.354 13:27:02 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:12:57.354 13:27:02 -- target/invalid.sh@31 -- # echo 'Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe' 00:12:57.354 13:27:02 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe' nqn.2016-06.io.spdk:cnode16923 00:12:57.613 [2024-12-15 13:27:03.196809] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16923: invalid model number 'Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe' 00:12:57.613 13:27:03 -- target/invalid.sh@58 -- # out='2024/12/15 13:27:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe nqn:nqn.2016-06.io.spdk:cnode16923], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe 00:12:57.613 request: 00:12:57.613 { 00:12:57.613 "method": "nvmf_create_subsystem", 00:12:57.613 "params": { 00:12:57.613 "nqn": "nqn.2016-06.io.spdk:cnode16923", 00:12:57.613 "model_number": "Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe" 00:12:57.613 } 00:12:57.613 } 00:12:57.613 Got JSON-RPC error response 00:12:57.613 GoRPCClient: error on JSON-RPC call' 00:12:57.613 13:27:03 -- target/invalid.sh@59 -- # [[ 2024/12/15 13:27:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe nqn:nqn.2016-06.io.spdk:cnode16923], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe 00:12:57.613 request: 00:12:57.613 { 00:12:57.613 "method": "nvmf_create_subsystem", 00:12:57.613 "params": { 00:12:57.613 "nqn": "nqn.2016-06.io.spdk:cnode16923", 00:12:57.613 "model_number": "Xd+t&tZG} ^)@(KCqL%,IVR}e*Hka{tq4o@2Nc%xe" 00:12:57.613 } 00:12:57.613 } 00:12:57.613 Got JSON-RPC error response 00:12:57.613 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:57.613 13:27:03 -- 
target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:57.871 [2024-12-15 13:27:03.461212] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.871 13:27:03 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:58.128 13:27:03 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:58.128 13:27:03 -- target/invalid.sh@67 -- # echo '' 00:12:58.128 13:27:03 -- target/invalid.sh@67 -- # head -n 1 00:12:58.128 13:27:03 -- target/invalid.sh@67 -- # IP= 00:12:58.128 13:27:03 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:58.386 [2024-12-15 13:27:04.066255] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:58.645 13:27:04 -- target/invalid.sh@69 -- # out='2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:58.645 request: 00:12:58.645 { 00:12:58.645 "method": "nvmf_subsystem_remove_listener", 00:12:58.645 "params": { 00:12:58.645 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.645 "listen_address": { 00:12:58.645 "trtype": "tcp", 00:12:58.645 "traddr": "", 00:12:58.645 "trsvcid": "4421" 00:12:58.645 } 00:12:58.645 } 00:12:58.645 } 00:12:58.645 Got JSON-RPC error response 00:12:58.645 GoRPCClient: error on JSON-RPC call' 00:12:58.645 13:27:04 -- target/invalid.sh@70 -- # [[ 2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:58.645 request: 00:12:58.645 { 00:12:58.645 "method": "nvmf_subsystem_remove_listener", 00:12:58.645 "params": { 00:12:58.645 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:58.645 "listen_address": { 00:12:58.645 "trtype": "tcp", 00:12:58.645 "traddr": "", 00:12:58.645 "trsvcid": "4421" 00:12:58.645 } 00:12:58.645 } 00:12:58.645 } 00:12:58.645 Got JSON-RPC error response 00:12:58.645 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:58.645 13:27:04 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29771 -i 0 00:12:58.904 [2024-12-15 13:27:04.354422] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29771: invalid cntlid range [0-65519] 00:12:58.904 13:27:04 -- target/invalid.sh@73 -- # out='2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29771], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:58.904 request: 00:12:58.904 { 00:12:58.904 "method": "nvmf_create_subsystem", 00:12:58.904 "params": { 00:12:58.904 "nqn": "nqn.2016-06.io.spdk:cnode29771", 00:12:58.904 "min_cntlid": 0 00:12:58.904 } 00:12:58.904 } 00:12:58.904 Got JSON-RPC error response 00:12:58.904 GoRPCClient: error on JSON-RPC call' 00:12:58.904 13:27:04 -- target/invalid.sh@74 -- # [[ 2024/12/15 13:27:04 error on JSON-RPC call, method: 
nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29771], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:58.904 request: 00:12:58.904 { 00:12:58.904 "method": "nvmf_create_subsystem", 00:12:58.904 "params": { 00:12:58.904 "nqn": "nqn.2016-06.io.spdk:cnode29771", 00:12:58.904 "min_cntlid": 0 00:12:58.904 } 00:12:58.904 } 00:12:58.904 Got JSON-RPC error response 00:12:58.904 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.904 13:27:04 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10307 -i 65520 00:12:59.162 [2024-12-15 13:27:04.658726] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10307: invalid cntlid range [65520-65519] 00:12:59.162 13:27:04 -- target/invalid.sh@75 -- # out='2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:59.162 request: 00:12:59.162 { 00:12:59.162 "method": "nvmf_create_subsystem", 00:12:59.162 "params": { 00:12:59.162 "nqn": "nqn.2016-06.io.spdk:cnode10307", 00:12:59.162 "min_cntlid": 65520 00:12:59.162 } 00:12:59.162 } 00:12:59.162 Got JSON-RPC error response 00:12:59.162 GoRPCClient: error on JSON-RPC call' 00:12:59.162 13:27:04 -- target/invalid.sh@76 -- # [[ 2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10307], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:59.162 request: 00:12:59.162 { 00:12:59.162 "method": "nvmf_create_subsystem", 00:12:59.162 "params": { 00:12:59.162 "nqn": "nqn.2016-06.io.spdk:cnode10307", 00:12:59.162 "min_cntlid": 65520 00:12:59.162 } 00:12:59.162 } 00:12:59.162 Got JSON-RPC error response 00:12:59.162 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.162 13:27:04 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6023 -I 0 00:12:59.421 [2024-12-15 13:27:04.935016] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6023: invalid cntlid range [1-0] 00:12:59.421 13:27:04 -- target/invalid.sh@77 -- # out='2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6023], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:59.421 request: 00:12:59.421 { 00:12:59.421 "method": "nvmf_create_subsystem", 00:12:59.421 "params": { 00:12:59.421 "nqn": "nqn.2016-06.io.spdk:cnode6023", 00:12:59.421 "max_cntlid": 0 00:12:59.421 } 00:12:59.421 } 00:12:59.421 Got JSON-RPC error response 00:12:59.421 GoRPCClient: error on JSON-RPC call' 00:12:59.421 13:27:04 -- target/invalid.sh@78 -- # [[ 2024/12/15 13:27:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6023], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:59.421 request: 00:12:59.421 { 00:12:59.421 "method": "nvmf_create_subsystem", 00:12:59.421 "params": { 00:12:59.421 "nqn": "nqn.2016-06.io.spdk:cnode6023", 
00:12:59.421 "max_cntlid": 0 00:12:59.421 } 00:12:59.421 } 00:12:59.421 Got JSON-RPC error response 00:12:59.421 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.421 13:27:04 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4050 -I 65520 00:12:59.679 [2024-12-15 13:27:05.159178] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4050: invalid cntlid range [1-65520] 00:12:59.679 13:27:05 -- target/invalid.sh@79 -- # out='2024/12/15 13:27:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4050], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:59.679 request: 00:12:59.679 { 00:12:59.679 "method": "nvmf_create_subsystem", 00:12:59.679 "params": { 00:12:59.679 "nqn": "nqn.2016-06.io.spdk:cnode4050", 00:12:59.679 "max_cntlid": 65520 00:12:59.679 } 00:12:59.679 } 00:12:59.679 Got JSON-RPC error response 00:12:59.679 GoRPCClient: error on JSON-RPC call' 00:12:59.679 13:27:05 -- target/invalid.sh@80 -- # [[ 2024/12/15 13:27:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4050], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:59.679 request: 00:12:59.679 { 00:12:59.679 "method": "nvmf_create_subsystem", 00:12:59.679 "params": { 00:12:59.679 "nqn": "nqn.2016-06.io.spdk:cnode4050", 00:12:59.679 "max_cntlid": 65520 00:12:59.679 } 00:12:59.679 } 00:12:59.679 Got JSON-RPC error response 00:12:59.679 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.679 13:27:05 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24932 -i 6 -I 5 00:12:59.938 [2024-12-15 13:27:05.383370] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24932: invalid cntlid range [6-5] 00:12:59.938 13:27:05 -- target/invalid.sh@83 -- # out='2024/12/15 13:27:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24932], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:59.938 request: 00:12:59.938 { 00:12:59.938 "method": "nvmf_create_subsystem", 00:12:59.938 "params": { 00:12:59.938 "nqn": "nqn.2016-06.io.spdk:cnode24932", 00:12:59.938 "min_cntlid": 6, 00:12:59.938 "max_cntlid": 5 00:12:59.938 } 00:12:59.938 } 00:12:59.938 Got JSON-RPC error response 00:12:59.938 GoRPCClient: error on JSON-RPC call' 00:12:59.938 13:27:05 -- target/invalid.sh@84 -- # [[ 2024/12/15 13:27:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode24932], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:59.938 request: 00:12:59.938 { 00:12:59.938 "method": "nvmf_create_subsystem", 00:12:59.938 "params": { 00:12:59.938 "nqn": "nqn.2016-06.io.spdk:cnode24932", 00:12:59.938 "min_cntlid": 6, 00:12:59.938 "max_cntlid": 5 00:12:59.938 } 00:12:59.938 } 00:12:59.938 Got JSON-RPC error response 00:12:59.938 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:59.938 13:27:05 -- target/invalid.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:59.938 13:27:05 -- target/invalid.sh@87 -- # out='request: 00:12:59.938 { 00:12:59.938 "name": "foobar", 00:12:59.938 "method": "nvmf_delete_target", 00:12:59.938 "req_id": 1 00:12:59.938 } 00:12:59.938 Got JSON-RPC error response 00:12:59.938 response: 00:12:59.938 { 00:12:59.938 "code": -32602, 00:12:59.938 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:59.938 }' 00:12:59.938 13:27:05 -- target/invalid.sh@88 -- # [[ request: 00:12:59.938 { 00:12:59.938 "name": "foobar", 00:12:59.938 "method": "nvmf_delete_target", 00:12:59.938 "req_id": 1 00:12:59.938 } 00:12:59.938 Got JSON-RPC error response 00:12:59.938 response: 00:12:59.938 { 00:12:59.938 "code": -32602, 00:12:59.938 "message": "The specified target doesn't exist, cannot delete it." 00:12:59.938 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:59.938 13:27:05 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:59.938 13:27:05 -- target/invalid.sh@91 -- # nvmftestfini 00:12:59.938 13:27:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:59.938 13:27:05 -- nvmf/common.sh@116 -- # sync 00:12:59.938 13:27:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:59.938 13:27:05 -- nvmf/common.sh@119 -- # set +e 00:12:59.938 13:27:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:59.938 13:27:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:59.938 rmmod nvme_tcp 00:12:59.938 rmmod nvme_fabrics 00:12:59.938 rmmod nvme_keyring 00:13:00.197 13:27:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:00.197 13:27:05 -- nvmf/common.sh@123 -- # set -e 00:13:00.197 13:27:05 -- nvmf/common.sh@124 -- # return 0 00:13:00.197 13:27:05 -- nvmf/common.sh@477 -- # '[' -n 78494 ']' 00:13:00.197 13:27:05 -- nvmf/common.sh@478 -- # killprocess 78494 00:13:00.197 13:27:05 -- common/autotest_common.sh@936 -- # '[' -z 78494 ']' 00:13:00.197 13:27:05 -- common/autotest_common.sh@940 -- # kill -0 78494 00:13:00.197 13:27:05 -- common/autotest_common.sh@941 -- # uname 00:13:00.197 13:27:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.197 13:27:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78494 00:13:00.197 killing process with pid 78494 00:13:00.197 13:27:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:00.197 13:27:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:00.197 13:27:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78494' 00:13:00.197 13:27:05 -- common/autotest_common.sh@955 -- # kill 78494 00:13:00.197 13:27:05 -- common/autotest_common.sh@960 -- # wait 78494 00:13:00.197 13:27:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:00.197 13:27:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:00.197 13:27:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:00.197 13:27:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.197 13:27:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:00.197 13:27:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.197 13:27:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.197 13:27:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.456 13:27:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:00.456 ************************************ 00:13:00.456 
END TEST nvmf_invalid 00:13:00.456 ************************************ 00:13:00.456 00:13:00.456 real 0m6.128s 00:13:00.456 user 0m24.395s 00:13:00.456 sys 0m1.297s 00:13:00.456 13:27:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.456 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.456 13:27:05 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.456 13:27:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:00.456 13:27:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.456 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.456 ************************************ 00:13:00.456 START TEST nvmf_abort 00:13:00.456 ************************************ 00:13:00.456 13:27:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:00.456 * Looking for test storage... 00:13:00.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:00.456 13:27:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:00.456 13:27:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:00.456 13:27:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:00.456 13:27:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:00.456 13:27:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:00.456 13:27:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:00.456 13:27:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:00.456 13:27:06 -- scripts/common.sh@335 -- # IFS=.-: 00:13:00.456 13:27:06 -- scripts/common.sh@335 -- # read -ra ver1 00:13:00.456 13:27:06 -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.456 13:27:06 -- scripts/common.sh@336 -- # read -ra ver2 00:13:00.456 13:27:06 -- scripts/common.sh@337 -- # local 'op=<' 00:13:00.456 13:27:06 -- scripts/common.sh@339 -- # ver1_l=2 00:13:00.456 13:27:06 -- scripts/common.sh@340 -- # ver2_l=1 00:13:00.456 13:27:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:00.456 13:27:06 -- scripts/common.sh@343 -- # case "$op" in 00:13:00.456 13:27:06 -- scripts/common.sh@344 -- # : 1 00:13:00.456 13:27:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:00.456 13:27:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.456 13:27:06 -- scripts/common.sh@364 -- # decimal 1 00:13:00.456 13:27:06 -- scripts/common.sh@352 -- # local d=1 00:13:00.456 13:27:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.456 13:27:06 -- scripts/common.sh@354 -- # echo 1 00:13:00.456 13:27:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:00.456 13:27:06 -- scripts/common.sh@365 -- # decimal 2 00:13:00.456 13:27:06 -- scripts/common.sh@352 -- # local d=2 00:13:00.456 13:27:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.456 13:27:06 -- scripts/common.sh@354 -- # echo 2 00:13:00.456 13:27:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:00.456 13:27:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:00.456 13:27:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:00.456 13:27:06 -- scripts/common.sh@367 -- # return 0 00:13:00.456 13:27:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.456 13:27:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.456 --rc genhtml_branch_coverage=1 00:13:00.456 --rc genhtml_function_coverage=1 00:13:00.456 --rc genhtml_legend=1 00:13:00.456 --rc geninfo_all_blocks=1 00:13:00.456 --rc geninfo_unexecuted_blocks=1 00:13:00.456 00:13:00.456 ' 00:13:00.456 13:27:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.456 --rc genhtml_branch_coverage=1 00:13:00.456 --rc genhtml_function_coverage=1 00:13:00.456 --rc genhtml_legend=1 00:13:00.456 --rc geninfo_all_blocks=1 00:13:00.456 --rc geninfo_unexecuted_blocks=1 00:13:00.456 00:13:00.456 ' 00:13:00.456 13:27:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.456 --rc genhtml_branch_coverage=1 00:13:00.456 --rc genhtml_function_coverage=1 00:13:00.456 --rc genhtml_legend=1 00:13:00.456 --rc geninfo_all_blocks=1 00:13:00.456 --rc geninfo_unexecuted_blocks=1 00:13:00.456 00:13:00.456 ' 00:13:00.456 13:27:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:00.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.456 --rc genhtml_branch_coverage=1 00:13:00.456 --rc genhtml_function_coverage=1 00:13:00.456 --rc genhtml_legend=1 00:13:00.456 --rc geninfo_all_blocks=1 00:13:00.456 --rc geninfo_unexecuted_blocks=1 00:13:00.456 00:13:00.456 ' 00:13:00.456 13:27:06 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:00.456 13:27:06 -- nvmf/common.sh@7 -- # uname -s 00:13:00.456 13:27:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.456 13:27:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.456 13:27:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.456 13:27:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.456 13:27:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.456 13:27:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.456 13:27:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.456 13:27:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.456 13:27:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.456 13:27:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.456 13:27:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:13:00.456 
13:27:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:13:00.456 13:27:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.456 13:27:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.456 13:27:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:00.456 13:27:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.456 13:27:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.456 13:27:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.456 13:27:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.456 13:27:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.456 13:27:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.715 13:27:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.715 13:27:06 -- paths/export.sh@5 -- # export PATH 00:13:00.715 13:27:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.715 13:27:06 -- nvmf/common.sh@46 -- # : 0 00:13:00.715 13:27:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.715 13:27:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.715 13:27:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.715 13:27:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.715 13:27:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.715 13:27:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:00.715 13:27:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.715 13:27:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.715 13:27:06 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.715 13:27:06 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:00.715 13:27:06 -- target/abort.sh@14 -- # nvmftestinit 00:13:00.715 13:27:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.715 13:27:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.715 13:27:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.715 13:27:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.715 13:27:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.715 13:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.715 13:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.715 13:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.715 13:27:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:00.715 13:27:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:00.715 13:27:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:00.715 13:27:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:00.715 13:27:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:00.715 13:27:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:00.715 13:27:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.715 13:27:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.715 13:27:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:00.715 13:27:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:00.715 13:27:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:00.715 13:27:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:00.715 13:27:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:00.715 13:27:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.715 13:27:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:00.715 13:27:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:00.715 13:27:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:00.715 13:27:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:00.715 13:27:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:00.715 13:27:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:00.715 Cannot find device "nvmf_tgt_br" 00:13:00.715 13:27:06 -- nvmf/common.sh@154 -- # true 00:13:00.715 13:27:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:00.715 Cannot find device "nvmf_tgt_br2" 00:13:00.715 13:27:06 -- nvmf/common.sh@155 -- # true 00:13:00.715 13:27:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:00.715 13:27:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:00.715 Cannot find device "nvmf_tgt_br" 00:13:00.715 13:27:06 -- nvmf/common.sh@157 -- # true 00:13:00.715 13:27:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:00.715 Cannot find device "nvmf_tgt_br2" 00:13:00.716 13:27:06 -- nvmf/common.sh@158 -- # true 00:13:00.716 13:27:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:00.716 13:27:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:00.716 13:27:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:00.716 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:00.716 13:27:06 -- nvmf/common.sh@161 -- # true 00:13:00.716 13:27:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:00.716 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:00.716 13:27:06 -- nvmf/common.sh@162 -- # true 00:13:00.716 13:27:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:00.716 13:27:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:00.716 13:27:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:00.716 13:27:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:00.716 13:27:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:00.716 13:27:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:00.716 13:27:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:00.716 13:27:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:00.716 13:27:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:00.716 13:27:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:00.716 13:27:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:00.716 13:27:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:00.716 13:27:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:00.975 13:27:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:00.975 13:27:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:00.975 13:27:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:00.975 13:27:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:00.975 13:27:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:00.975 13:27:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:00.975 13:27:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:00.975 13:27:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:00.975 13:27:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:00.975 13:27:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:00.975 13:27:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:00.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:13:00.975 00:13:00.975 --- 10.0.0.2 ping statistics --- 00:13:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.975 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:00.975 13:27:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:00.975 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:00.975 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:00.975 00:13:00.975 --- 10.0.0.3 ping statistics --- 00:13:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.975 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:00.975 13:27:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:00.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:13:00.975 00:13:00.975 --- 10.0.0.1 ping statistics --- 00:13:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.975 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:13:00.975 13:27:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.975 13:27:06 -- nvmf/common.sh@421 -- # return 0 00:13:00.975 13:27:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.975 13:27:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.975 13:27:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.975 13:27:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.975 13:27:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.975 13:27:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.975 13:27:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.975 13:27:06 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:00.975 13:27:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.975 13:27:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.975 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:00.975 13:27:06 -- nvmf/common.sh@469 -- # nvmfpid=79010 00:13:00.975 13:27:06 -- nvmf/common.sh@470 -- # waitforlisten 79010 00:13:00.976 13:27:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:00.976 13:27:06 -- common/autotest_common.sh@829 -- # '[' -z 79010 ']' 00:13:00.976 13:27:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.976 13:27:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.976 13:27:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.976 13:27:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.976 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:13:00.976 [2024-12-15 13:27:06.586102] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:00.976 [2024-12-15 13:27:06.586205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.234 [2024-12-15 13:27:06.728824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.235 [2024-12-15 13:27:06.784052] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.235 [2024-12-15 13:27:06.784197] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.235 [2024-12-15 13:27:06.784210] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.235 [2024-12-15 13:27:06.784218] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
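All of the interface churn above is nvmf_veth_init giving the target a private network namespace before nvmfappstart launches nvmf_tgt inside it. Reduced to the commands actually visible in this trace (the second pair, nvmf_tgt_if2 with 10.0.0.3, is set up the same way), the bring-up is roughly:

# Host side 10.0.0.1 <-> bridge nvmf_br <-> namespace nvmf_tgt_ns_spdk (10.0.0.2)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # connectivity check, as in the trace
# The target then runs inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &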
00:13:01.235 [2024-12-15 13:27:06.784337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.235 [2024-12-15 13:27:06.785005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.235 [2024-12-15 13:27:06.785049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.170 13:27:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.170 13:27:07 -- common/autotest_common.sh@862 -- # return 0 00:13:02.170 13:27:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.170 13:27:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 13:27:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.170 13:27:07 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 [2024-12-15 13:27:07.657996] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 Malloc0 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 Delay0 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 [2024-12-15 13:27:07.733323] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.170 13:27:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.170 13:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:02.170 13:27:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.170 13:27:07 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:02.429 [2024-12-15 13:27:07.913262] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:04.332 Initializing NVMe Controllers 00:13:04.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:04.332 controller IO queue size 128 less than required 00:13:04.332 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:04.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:04.332 Initialization complete. Launching workers. 00:13:04.332 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35618 00:13:04.332 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35679, failed to submit 62 00:13:04.332 success 35618, unsuccess 61, failed 0 00:13:04.332 13:27:09 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:04.332 13:27:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.332 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:13:04.332 13:27:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.332 13:27:09 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:04.332 13:27:09 -- target/abort.sh@38 -- # nvmftestfini 00:13:04.332 13:27:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:04.332 13:27:09 -- nvmf/common.sh@116 -- # sync 00:13:04.332 13:27:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:04.332 13:27:10 -- nvmf/common.sh@119 -- # set +e 00:13:04.333 13:27:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:04.333 13:27:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:04.333 rmmod nvme_tcp 00:13:04.591 rmmod nvme_fabrics 00:13:04.591 rmmod nvme_keyring 00:13:04.591 13:27:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:04.591 13:27:10 -- nvmf/common.sh@123 -- # set -e 00:13:04.591 13:27:10 -- nvmf/common.sh@124 -- # return 0 00:13:04.591 13:27:10 -- nvmf/common.sh@477 -- # '[' -n 79010 ']' 00:13:04.591 13:27:10 -- nvmf/common.sh@478 -- # killprocess 79010 00:13:04.591 13:27:10 -- common/autotest_common.sh@936 -- # '[' -z 79010 ']' 00:13:04.591 13:27:10 -- common/autotest_common.sh@940 -- # kill -0 79010 00:13:04.591 13:27:10 -- common/autotest_common.sh@941 -- # uname 00:13:04.591 13:27:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.591 13:27:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79010 00:13:04.591 killing process with pid 79010 00:13:04.591 13:27:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:04.591 13:27:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:04.591 13:27:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79010' 00:13:04.591 13:27:10 -- common/autotest_common.sh@955 -- # kill 79010 00:13:04.591 13:27:10 -- common/autotest_common.sh@960 -- # wait 79010 00:13:04.850 13:27:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:04.850 13:27:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:04.850 13:27:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:04.850 13:27:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.850 13:27:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:04.850 13:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.850 
13:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.850 13:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.850 13:27:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:04.850 00:13:04.850 real 0m4.404s 00:13:04.850 user 0m12.602s 00:13:04.850 sys 0m1.027s 00:13:04.850 13:27:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.850 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 ************************************ 00:13:04.850 END TEST nvmf_abort 00:13:04.850 ************************************ 00:13:04.850 13:27:10 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:04.850 13:27:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:04.850 13:27:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.850 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 ************************************ 00:13:04.850 START TEST nvmf_ns_hotplug_stress 00:13:04.850 ************************************ 00:13:04.850 13:27:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:04.850 * Looking for test storage... 00:13:04.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:04.850 13:27:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:04.850 13:27:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:04.850 13:27:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:05.110 13:27:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:05.110 13:27:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:05.110 13:27:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:05.110 13:27:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:05.110 13:27:10 -- scripts/common.sh@335 -- # IFS=.-: 00:13:05.110 13:27:10 -- scripts/common.sh@335 -- # read -ra ver1 00:13:05.110 13:27:10 -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.110 13:27:10 -- scripts/common.sh@336 -- # read -ra ver2 00:13:05.110 13:27:10 -- scripts/common.sh@337 -- # local 'op=<' 00:13:05.110 13:27:10 -- scripts/common.sh@339 -- # ver1_l=2 00:13:05.110 13:27:10 -- scripts/common.sh@340 -- # ver2_l=1 00:13:05.110 13:27:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:05.110 13:27:10 -- scripts/common.sh@343 -- # case "$op" in 00:13:05.110 13:27:10 -- scripts/common.sh@344 -- # : 1 00:13:05.110 13:27:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:05.110 13:27:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:05.110 13:27:10 -- scripts/common.sh@364 -- # decimal 1 00:13:05.110 13:27:10 -- scripts/common.sh@352 -- # local d=1 00:13:05.110 13:27:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.110 13:27:10 -- scripts/common.sh@354 -- # echo 1 00:13:05.110 13:27:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:05.110 13:27:10 -- scripts/common.sh@365 -- # decimal 2 00:13:05.110 13:27:10 -- scripts/common.sh@352 -- # local d=2 00:13:05.110 13:27:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.110 13:27:10 -- scripts/common.sh@354 -- # echo 2 00:13:05.110 13:27:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:05.110 13:27:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:05.110 13:27:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:05.110 13:27:10 -- scripts/common.sh@367 -- # return 0 00:13:05.110 13:27:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.110 13:27:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.110 --rc genhtml_branch_coverage=1 00:13:05.110 --rc genhtml_function_coverage=1 00:13:05.110 --rc genhtml_legend=1 00:13:05.110 --rc geninfo_all_blocks=1 00:13:05.110 --rc geninfo_unexecuted_blocks=1 00:13:05.110 00:13:05.110 ' 00:13:05.110 13:27:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.110 --rc genhtml_branch_coverage=1 00:13:05.110 --rc genhtml_function_coverage=1 00:13:05.110 --rc genhtml_legend=1 00:13:05.110 --rc geninfo_all_blocks=1 00:13:05.110 --rc geninfo_unexecuted_blocks=1 00:13:05.110 00:13:05.110 ' 00:13:05.110 13:27:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.110 --rc genhtml_branch_coverage=1 00:13:05.110 --rc genhtml_function_coverage=1 00:13:05.110 --rc genhtml_legend=1 00:13:05.110 --rc geninfo_all_blocks=1 00:13:05.110 --rc geninfo_unexecuted_blocks=1 00:13:05.110 00:13:05.110 ' 00:13:05.110 13:27:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:05.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.110 --rc genhtml_branch_coverage=1 00:13:05.110 --rc genhtml_function_coverage=1 00:13:05.110 --rc genhtml_legend=1 00:13:05.110 --rc geninfo_all_blocks=1 00:13:05.110 --rc geninfo_unexecuted_blocks=1 00:13:05.110 00:13:05.110 ' 00:13:05.110 13:27:10 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:05.110 13:27:10 -- nvmf/common.sh@7 -- # uname -s 00:13:05.110 13:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.110 13:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.110 13:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.110 13:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.110 13:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.110 13:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.110 13:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.110 13:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.110 13:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.110 13:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
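The lt/cmp_versions trace above is the harness deciding whether the installed lcov predates version 2: each version string is split on dots into fields and the fields are compared numerically from left to right, with missing fields treated as zero. A self-contained sketch of that comparison (dot-separated numeric fields only, simplified from the scripts/common.sh logic shown in the trace):

    # succeed (exit 0) when dotted version $1 is strictly less than $2,
    # e.g. version_lt 1.15 2 is true, version_lt 2.0 1.15 is false
    version_lt() {
        local -a a b
        local i n x y
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0}; y=${b[i]:-0}    # missing fields count as 0
            if (( x < y )); then return 0; fi
            if (( x > y )); then return 1; fi
        done
        return 1                          # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"

Here 1.15 compares below 2 because the first fields already differ (1 < 2), which is why the run above selects the pre-2.0 lcov_branch_coverage/lcov_function_coverage option names.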
00:13:05.110 13:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:13:05.110 13:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.110 13:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.110 13:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:05.110 13:27:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:05.110 13:27:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.110 13:27:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.110 13:27:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.110 13:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 13:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 13:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 13:27:10 -- paths/export.sh@5 -- # export PATH 00:13:05.110 13:27:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.110 13:27:10 -- nvmf/common.sh@46 -- # : 0 00:13:05.110 13:27:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:05.110 13:27:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:05.110 13:27:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:05.110 13:27:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.110 13:27:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.110 13:27:10 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:05.110 13:27:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:05.110 13:27:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:05.110 13:27:10 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:05.110 13:27:10 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:05.110 13:27:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:05.110 13:27:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.110 13:27:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:05.110 13:27:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:05.110 13:27:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:05.110 13:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.110 13:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.110 13:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.110 13:27:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:05.110 13:27:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:05.110 13:27:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.110 13:27:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.110 13:27:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:05.110 13:27:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:05.111 13:27:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:05.111 13:27:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:05.111 13:27:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:05.111 13:27:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.111 13:27:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:05.111 13:27:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:05.111 13:27:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:05.111 13:27:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:05.111 13:27:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:05.111 13:27:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:05.111 Cannot find device "nvmf_tgt_br" 00:13:05.111 13:27:10 -- nvmf/common.sh@154 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:05.111 Cannot find device "nvmf_tgt_br2" 00:13:05.111 13:27:10 -- nvmf/common.sh@155 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:05.111 13:27:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:05.111 Cannot find device "nvmf_tgt_br" 00:13:05.111 13:27:10 -- nvmf/common.sh@157 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:05.111 Cannot find device "nvmf_tgt_br2" 00:13:05.111 13:27:10 -- nvmf/common.sh@158 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:05.111 13:27:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:05.111 13:27:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.111 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:05.111 13:27:10 -- nvmf/common.sh@161 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.111 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.111 13:27:10 -- nvmf/common.sh@162 -- # true 00:13:05.111 13:27:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:05.111 13:27:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:05.111 13:27:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:05.111 13:27:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:05.111 13:27:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:05.111 13:27:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:05.111 13:27:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:05.111 13:27:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:05.111 13:27:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:05.111 13:27:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:05.111 13:27:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:05.111 13:27:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:05.111 13:27:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:05.111 13:27:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:05.370 13:27:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:05.370 13:27:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:05.370 13:27:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:05.370 13:27:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:05.370 13:27:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.370 13:27:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.370 13:27:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:05.370 13:27:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:05.370 13:27:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:05.370 13:27:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:05.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:05.370 00:13:05.370 --- 10.0.0.2 ping statistics --- 00:13:05.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.370 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:05.370 13:27:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:05.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:05.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:13:05.370 00:13:05.370 --- 10.0.0.3 ping statistics --- 00:13:05.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.370 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:13:05.370 13:27:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:05.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:13:05.370 00:13:05.370 --- 10.0.0.1 ping statistics --- 00:13:05.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.370 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:13:05.370 13:27:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.370 13:27:10 -- nvmf/common.sh@421 -- # return 0 00:13:05.370 13:27:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:05.370 13:27:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.370 13:27:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:05.370 13:27:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:05.370 13:27:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.370 13:27:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:05.370 13:27:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:05.370 13:27:10 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:05.370 13:27:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:05.370 13:27:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.370 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.370 13:27:10 -- nvmf/common.sh@469 -- # nvmfpid=79279 00:13:05.370 13:27:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.370 13:27:10 -- nvmf/common.sh@470 -- # waitforlisten 79279 00:13:05.370 13:27:10 -- common/autotest_common.sh@829 -- # '[' -z 79279 ']' 00:13:05.370 13:27:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.370 13:27:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:05.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.370 13:27:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.370 13:27:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:05.370 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:05.370 [2024-12-15 13:27:10.958097] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:05.370 [2024-12-15 13:27:10.958347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.629 [2024-12-15 13:27:11.098313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.629 [2024-12-15 13:27:11.152700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.629 [2024-12-15 13:27:11.152843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.629 [2024-12-15 13:27:11.152855] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.629 [2024-12-15 13:27:11.152862] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
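nvmfappstart above backgrounds nvmf_tgt inside the namespace and then waits for it to come up: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is the harness polling the RPC socket before any rpc.py call is issued. A minimal version of that wait loop (a simplified sketch, not the real waitforlisten helper; it assumes the rpc.py path shown in this trace and uses the rpc_get_methods RPC as the readiness probe):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # poll until the app behind $pid answers RPCs on $sock, or give up
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
            if "$rpc" -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                               # app is accepting RPCs
            fi
            sleep 0.5
        done
        return 1
    }

    # e.g. after: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -m 0xE & nvmfpid=$!
    wait_for_rpc "$nvmfpid" || exit 1

Only after this wait returns does the script move on to nvmf_create_transport and the rest of the subsystem setup below.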
00:13:05.629 [2024-12-15 13:27:11.153363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.629 [2024-12-15 13:27:11.153670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.629 [2024-12-15 13:27:11.153672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.565 13:27:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.565 13:27:11 -- common/autotest_common.sh@862 -- # return 0 00:13:06.565 13:27:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:06.565 13:27:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.565 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:13:06.565 13:27:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.565 13:27:12 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:06.565 13:27:12 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:06.824 [2024-12-15 13:27:12.311390] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.824 13:27:12 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.083 13:27:12 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.341 [2024-12-15 13:27:12.807923] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.341 13:27:12 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.600 13:27:13 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:07.600 Malloc0 00:13:07.858 13:27:13 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:07.858 Delay0 00:13:07.858 13:27:13 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.117 13:27:13 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:08.375 NULL1 00:13:08.375 13:27:13 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:08.634 13:27:14 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79410 00:13:08.634 13:27:14 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:08.634 13:27:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:08.634 13:27:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.009 Read completed with error (sct=0, sc=11) 00:13:10.009 13:27:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.271 13:27:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:10.271 13:27:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:10.271 true 00:13:10.271 13:27:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:10.271 13:27:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.209 13:27:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.467 13:27:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:11.468 13:27:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:11.726 true 00:13:11.726 13:27:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:11.726 13:27:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.985 13:27:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.243 13:27:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:12.243 13:27:17 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:12.243 true 00:13:12.502 13:27:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:12.502 13:27:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.438 13:27:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.438 13:27:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:13.438 13:27:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:13.697 true 00:13:13.697 13:27:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:13.697 13:27:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.955 13:27:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.214 13:27:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:14.214 13:27:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:14.472 true 00:13:14.472 13:27:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:14.472 13:27:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.405 13:27:20 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.405 13:27:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:15.405 13:27:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:15.664 true 00:13:15.664 13:27:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:15.664 13:27:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.941 13:27:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.230 13:27:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:16.230 13:27:21 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:16.489 true 00:13:16.489 13:27:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:16.489 13:27:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.424 13:27:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.424 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.424 13:27:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:17.424 13:27:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:17.683 true 00:13:17.683 13:27:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:17.683 13:27:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.941 13:27:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.200 13:27:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:18.200 13:27:23 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:18.458 true 00:13:18.458 13:27:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:18.458 13:27:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.394 13:27:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.394 13:27:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:19.394 13:27:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:19.652 true 00:13:19.652 13:27:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:19.652 13:27:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.910 13:27:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.168 13:27:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:20.168 13:27:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:20.427 true 00:13:20.427 13:27:26 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:20.427 13:27:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.363 13:27:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.622 13:27:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:21.622 13:27:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:21.622 true 00:13:21.622 13:27:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:21.622 13:27:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.880 13:27:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.448 13:27:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:22.448 13:27:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:22.448 true 00:13:22.448 13:27:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:22.448 13:27:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.707 13:27:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.965 13:27:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:22.965 13:27:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:23.224 true 00:13:23.224 13:27:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:23.224 13:27:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.160 13:27:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.419 13:27:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:24.419 13:27:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:24.677 true 00:13:24.677 13:27:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:24.677 13:27:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.613 13:27:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.613 13:27:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:25.613 13:27:31 -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:25.872 true 00:13:25.872 13:27:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:25.872 13:27:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.131 13:27:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.389 13:27:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:26.389 13:27:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:26.648 true 00:13:26.648 13:27:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:26.648 13:27:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.584 13:27:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.842 13:27:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:27.842 13:27:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:28.101 true 00:13:28.101 13:27:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:28.101 13:27:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.360 13:27:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.618 13:27:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:28.618 13:27:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:28.876 true 00:13:28.876 13:27:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:28.876 13:27:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.134 13:27:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.393 13:27:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:29.393 13:27:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:29.651 true 00:13:29.651 13:27:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:29.651 13:27:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.585 13:27:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.843 13:27:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:30.843 13:27:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:31.101 true 00:13:31.101 13:27:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:31.101 13:27:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.360 13:27:36 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.618 13:27:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:31.618 13:27:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:31.618 true 00:13:31.618 13:27:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:31.618 13:27:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.553 13:27:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.812 13:27:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:32.812 13:27:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:33.071 true 00:13:33.071 13:27:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:33.071 13:27:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.330 13:27:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.588 13:27:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:33.588 13:27:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:33.588 true 00:13:33.588 13:27:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:33.589 13:27:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.527 13:27:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.804 13:27:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:34.804 13:27:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:35.078 true 00:13:35.078 13:27:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:35.078 13:27:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.337 13:27:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.597 13:27:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:35.597 13:27:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:35.597 true 00:13:35.597 13:27:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:35.597 13:27:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.534 13:27:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.792 13:27:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:36.793 13:27:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:37.051 true 00:13:37.051 13:27:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:37.051 13:27:42 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.310 13:27:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.569 13:27:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:37.569 13:27:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:37.828 true 00:13:37.828 13:27:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:37.828 13:27:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.764 13:27:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:38.764 13:27:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:38.764 13:27:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:39.023 Initializing NVMe Controllers 00:13:39.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:39.023 Controller IO queue size 128, less than required. 00:13:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:39.023 Controller IO queue size 128, less than required. 00:13:39.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:39.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:39.023 Initialization complete. Launching workers. 
00:13:39.023 ======================================================== 00:13:39.023 Latency(us) 00:13:39.023 Device Information : IOPS MiB/s Average min max 00:13:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 633.85 0.31 110160.75 1832.95 1096805.88 00:13:39.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13945.88 6.81 9178.00 1627.22 576193.81 00:13:39.023 ======================================================== 00:13:39.023 Total : 14579.73 7.12 13568.23 1627.22 1096805.88 00:13:39.023 00:13:39.023 true 00:13:39.023 13:27:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79410 00:13:39.023 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79410) - No such process 00:13:39.023 13:27:44 -- target/ns_hotplug_stress.sh@53 -- # wait 79410 00:13:39.023 13:27:44 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.282 13:27:44 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.546 13:27:45 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:39.546 13:27:45 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:39.546 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:39.546 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.546 13:27:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:39.805 null0 00:13:39.805 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.805 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.805 13:27:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:39.805 null1 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:40.063 null2 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.063 13:27:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:40.322 null3 00:13:40.322 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.322 13:27:45 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.322 13:27:45 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:40.581 null4 00:13:40.581 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.581 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.581 13:27:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:40.839 null5 00:13:40.839 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:40.839 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:40.839 13:27:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:41.098 null6 00:13:41.098 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:41.098 13:27:46 -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:41.098 13:27:46 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:41.357 null7 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
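The add_remove phase traced above is the hotplug stress itself: eight null bdevs (null0 through null7) are attached to and detached from nqn.2016-06.io.spdk:cnode1 in parallel, each under its own namespace ID, to exercise namespace hotplug on the target. A condensed sketch of that loop structure, using the same rpc.py calls and the trace's ten iterations per worker (an illustration only; the real ns_hotplug_stress.sh also resizes NULL1 and re-adds Delay0 under load, as shown earlier in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    # repeatedly attach bdev $2 as namespace ID $1, then detach it again
    add_remove() {
        local nsid=$1 bdev=$2 i
        for (( i = 0; i < 10; i++ )); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$subsys" "$nsid"
        done
    }

    for (( n = 0; n < nthreads; n++ )); do
        $rpc bdev_null_create "null$n" 100 4096    # 100 MB bdev, 4 KiB block size
        add_remove "$((n + 1))" "null$n" &         # hotplug worker in background
        pids+=($!)
    done
    wait "${pids[@]}"

Each backgrounded worker PID is collected in pids, exactly as the pids+=($!) lines above do, so the script can wait for all eight hotplug loops to finish before tearing the subsystem down.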
00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:41.357 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:41.358 13:27:46 -- target/ns_hotplug_stress.sh@66 -- # wait 80437 80439 80440 80442 80444 80446 80449 80450 00:13:41.617 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.617 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.617 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.617 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.617 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.876 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.134 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.135 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.393 13:27:47 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.393 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.393 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.393 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.393 13:27:47 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.393 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.653 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.910 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.169 13:27:48 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.427 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.427 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.428 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.428 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.428 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.428 13:27:48 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.428 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.428 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.686 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.687 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.687 13:27:49 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.945 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.945 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.945 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.945 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.945 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.946 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.946 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.204 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.205 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.463 13:27:49 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.463 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.463 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.463 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.722 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.723 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
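The remainder of this stretch is further iterations of the same eight workers; every (( ++i )) / (( i < 10 )) pair is one worker advancing its loop counter before issuing its next add/remove pair. To watch the namespaces appear and disappear while the loop runs, the subsystem can be queried out of band with the standard nvmf_get_subsystems RPC; this is an illustrative check only, not something the test script itself does (jq is assumed to be available):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'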
00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.981 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.240 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.240 13:27:50 -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:45.499 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.499 13:27:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.499 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.758 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.017 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.275 13:27:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.534 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:46.793 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:47.052 13:27:52 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:47.052 13:27:52 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:47.052 13:27:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.052 13:27:52 -- nvmf/common.sh@116 -- # sync 00:13:47.052 13:27:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:47.052 13:27:52 -- nvmf/common.sh@119 -- # set +e 00:13:47.052 13:27:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:47.052 13:27:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:47.052 rmmod nvme_tcp 00:13:47.052 rmmod nvme_fabrics 00:13:47.052 rmmod nvme_keyring 00:13:47.052 13:27:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:47.052 13:27:52 -- nvmf/common.sh@123 -- # set -e 00:13:47.052 13:27:52 -- nvmf/common.sh@124 -- # return 0 00:13:47.052 13:27:52 -- nvmf/common.sh@477 -- # '[' -n 79279 ']' 00:13:47.052 13:27:52 -- nvmf/common.sh@478 -- # killprocess 79279 00:13:47.052 13:27:52 -- common/autotest_common.sh@936 -- # '[' -z 79279 ']' 00:13:47.052 13:27:52 -- common/autotest_common.sh@940 -- # kill -0 79279 00:13:47.052 13:27:52 -- common/autotest_common.sh@941 -- # uname 00:13:47.052 13:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:47.052 13:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79279 00:13:47.311 13:27:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:47.311 killing process with pid 79279 00:13:47.311 13:27:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:47.311 13:27:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79279' 00:13:47.311 13:27:52 -- common/autotest_common.sh@955 -- # kill 79279 00:13:47.311 13:27:52 -- common/autotest_common.sh@960 -- # wait 79279 00:13:47.311 13:27:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.311 13:27:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.311 13:27:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.311 13:27:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.311 13:27:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.311 13:27:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.311 13:27:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.311 13:27:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.311 13:27:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:47.311 00:13:47.311 real 0m42.588s 00:13:47.311 user 3m25.566s 00:13:47.311 sys 0m12.015s 00:13:47.311 13:27:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:47.311 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:13:47.311 ************************************ 00:13:47.311 END TEST nvmf_ns_hotplug_stress 00:13:47.311 ************************************ 00:13:47.570 13:27:53 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:47.570 13:27:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.570 13:27:53 -- common/autotest_common.sh@10 -- # set +x 00:13:47.570 ************************************ 00:13:47.570 START TEST nvmf_connect_stress 00:13:47.570 ************************************ 00:13:47.570 13:27:53 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:47.570 * Looking for test storage... 00:13:47.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:47.570 13:27:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:47.570 13:27:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:47.570 13:27:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:47.570 13:27:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:47.570 13:27:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:47.570 13:27:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:47.570 13:27:53 -- scripts/common.sh@335 -- # IFS=.-: 00:13:47.570 13:27:53 -- scripts/common.sh@335 -- # read -ra ver1 00:13:47.570 13:27:53 -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.570 13:27:53 -- scripts/common.sh@336 -- # read -ra ver2 00:13:47.570 13:27:53 -- scripts/common.sh@337 -- # local 'op=<' 00:13:47.570 13:27:53 -- scripts/common.sh@339 -- # ver1_l=2 00:13:47.570 13:27:53 -- scripts/common.sh@340 -- # ver2_l=1 00:13:47.570 13:27:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:47.570 13:27:53 -- scripts/common.sh@343 -- # case "$op" in 00:13:47.570 13:27:53 -- scripts/common.sh@344 -- # : 1 00:13:47.570 13:27:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:47.570 13:27:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:47.570 13:27:53 -- scripts/common.sh@364 -- # decimal 1 00:13:47.570 13:27:53 -- scripts/common.sh@352 -- # local d=1 00:13:47.570 13:27:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.570 13:27:53 -- scripts/common.sh@354 -- # echo 1 00:13:47.570 13:27:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:47.570 13:27:53 -- scripts/common.sh@365 -- # decimal 2 00:13:47.570 13:27:53 -- scripts/common.sh@352 -- # local d=2 00:13:47.570 13:27:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.570 13:27:53 -- scripts/common.sh@354 -- # echo 2 00:13:47.570 13:27:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:47.570 13:27:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:47.570 13:27:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:47.570 13:27:53 -- scripts/common.sh@367 -- # return 0 00:13:47.570 13:27:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:47.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.570 --rc genhtml_branch_coverage=1 00:13:47.570 --rc genhtml_function_coverage=1 00:13:47.570 --rc genhtml_legend=1 00:13:47.570 --rc geninfo_all_blocks=1 00:13:47.570 --rc geninfo_unexecuted_blocks=1 00:13:47.570 00:13:47.570 ' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:47.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.570 --rc genhtml_branch_coverage=1 00:13:47.570 --rc genhtml_function_coverage=1 00:13:47.570 --rc genhtml_legend=1 00:13:47.570 --rc geninfo_all_blocks=1 00:13:47.570 --rc geninfo_unexecuted_blocks=1 00:13:47.570 00:13:47.570 ' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:47.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.570 --rc genhtml_branch_coverage=1 00:13:47.570 --rc genhtml_function_coverage=1 00:13:47.570 --rc genhtml_legend=1 
00:13:47.570 --rc geninfo_all_blocks=1 00:13:47.570 --rc geninfo_unexecuted_blocks=1 00:13:47.570 00:13:47.570 ' 00:13:47.570 13:27:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:47.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.570 --rc genhtml_branch_coverage=1 00:13:47.570 --rc genhtml_function_coverage=1 00:13:47.570 --rc genhtml_legend=1 00:13:47.570 --rc geninfo_all_blocks=1 00:13:47.570 --rc geninfo_unexecuted_blocks=1 00:13:47.570 00:13:47.570 ' 00:13:47.570 13:27:53 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:47.570 13:27:53 -- nvmf/common.sh@7 -- # uname -s 00:13:47.570 13:27:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.570 13:27:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.570 13:27:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.570 13:27:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.570 13:27:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.570 13:27:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.570 13:27:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.570 13:27:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.570 13:27:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.570 13:27:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.570 13:27:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:13:47.570 13:27:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:13:47.570 13:27:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.570 13:27:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.570 13:27:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:47.570 13:27:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.570 13:27:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.570 13:27:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.570 13:27:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.571 13:27:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.571 13:27:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.571 13:27:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.571 13:27:53 -- paths/export.sh@5 -- # export PATH 00:13:47.571 13:27:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.571 13:27:53 -- nvmf/common.sh@46 -- # : 0 00:13:47.571 13:27:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.571 13:27:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.571 13:27:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.571 13:27:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.571 13:27:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.571 13:27:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:47.571 13:27:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.571 13:27:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.571 13:27:53 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:47.571 13:27:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.571 13:27:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.571 13:27:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.571 13:27:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.571 13:27:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.571 13:27:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.571 13:27:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.571 13:27:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.571 13:27:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:47.571 13:27:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:47.571 13:27:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:47.571 13:27:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:47.571 13:27:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:47.571 13:27:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:47.571 13:27:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.571 13:27:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.571 13:27:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:47.571 13:27:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:47.571 13:27:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:47.571 13:27:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:47.571 13:27:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:47.571 13:27:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:47.571 13:27:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:47.571 13:27:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:47.571 13:27:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:47.571 13:27:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:47.571 13:27:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:47.571 13:27:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:47.571 Cannot find device "nvmf_tgt_br" 00:13:47.571 13:27:53 -- nvmf/common.sh@154 -- # true 00:13:47.571 13:27:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.571 Cannot find device "nvmf_tgt_br2" 00:13:47.571 13:27:53 -- nvmf/common.sh@155 -- # true 00:13:47.571 13:27:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:47.571 13:27:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:47.830 Cannot find device "nvmf_tgt_br" 00:13:47.830 13:27:53 -- nvmf/common.sh@157 -- # true 00:13:47.830 13:27:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:47.830 Cannot find device "nvmf_tgt_br2" 00:13:47.830 13:27:53 -- nvmf/common.sh@158 -- # true 00:13:47.830 13:27:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:47.830 13:27:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:47.830 13:27:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.830 13:27:53 -- nvmf/common.sh@161 -- # true 00:13:47.830 13:27:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.830 13:27:53 -- nvmf/common.sh@162 -- # true 00:13:47.830 13:27:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.830 13:27:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.830 13:27:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.830 13:27:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.830 13:27:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.830 13:27:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.830 13:27:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.830 13:27:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.830 13:27:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.830 13:27:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:47.830 13:27:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:47.830 13:27:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:47.830 13:27:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:47.830 13:27:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.830 13:27:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.830 13:27:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.830 13:27:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:47.830 13:27:53 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:47.830 13:27:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.830 13:27:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.830 13:27:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.830 13:27:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.830 13:27:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.830 13:27:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:47.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:47.830 00:13:47.830 --- 10.0.0.2 ping statistics --- 00:13:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.830 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:47.830 13:27:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:47.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:13:47.830 00:13:47.830 --- 10.0.0.3 ping statistics --- 00:13:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.830 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:47.830 13:27:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:47.830 00:13:47.830 --- 10.0.0.1 ping statistics --- 00:13:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.830 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:47.830 13:27:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.830 13:27:53 -- nvmf/common.sh@421 -- # return 0 00:13:47.830 13:27:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.830 13:27:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.830 13:27:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.830 13:27:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.830 13:27:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.830 13:27:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.830 13:27:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.830 13:27:53 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:47.830 13:27:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.830 13:27:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.830 13:27:53 -- common/autotest_common.sh@10 -- # set +x 00:13:47.830 13:27:53 -- nvmf/common.sh@469 -- # nvmfpid=81775 00:13:47.830 13:27:53 -- nvmf/common.sh@470 -- # waitforlisten 81775 00:13:47.830 13:27:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:47.830 13:27:53 -- common/autotest_common.sh@829 -- # '[' -z 81775 ']' 00:13:47.830 13:27:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.830 13:27:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.830 13:27:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
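The ip/iptables commands traced above are nvmf_veth_init building the test network: a network namespace nvmf_tgt_ns_spdk holds the two target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end stays in the host namespace as nvmf_init_if (10.0.0.1), and the host-side peers are enslaved to the nvmf_br bridge. The earlier "Cannot find device" lines are only the teardown of interfaces that did not exist yet, and the three pings confirm initiator/target reachability before nvmf_tgt is started inside the namespace. Condensed from the commands above (link-up steps omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT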
00:13:47.830 13:27:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.830 13:27:53 -- common/autotest_common.sh@10 -- # set +x 00:13:48.089 [2024-12-15 13:27:53.549115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.089 [2024-12-15 13:27:53.549197] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.089 [2024-12-15 13:27:53.675841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.089 [2024-12-15 13:27:53.738028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:48.089 [2024-12-15 13:27:53.738171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.089 [2024-12-15 13:27:53.738182] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.089 [2024-12-15 13:27:53.738190] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.089 [2024-12-15 13:27:53.738313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.089 [2024-12-15 13:27:53.738430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.089 [2024-12-15 13:27:53.738434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.025 13:27:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:49.025 13:27:54 -- common/autotest_common.sh@862 -- # return 0 00:13:49.025 13:27:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:49.025 13:27:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:49.025 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.025 13:27:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.025 13:27:54 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:49.025 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.025 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.025 [2024-12-15 13:27:54.536618] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.025 13:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.025 13:27:54 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.025 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.025 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.025 13:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.025 13:27:54 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.025 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.025 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.025 [2024-12-15 13:27:54.554461] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.025 13:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.025 13:27:54 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:49.025 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.025 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.025 NULL1 00:13:49.025 
13:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.025 13:27:54 -- target/connect_stress.sh@21 -- # PERF_PID=81827 00:13:49.025 13:27:54 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:49.025 13:27:54 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:49.026 13:27:54 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:49.026 13:27:54 -- target/connect_stress.sh@28 -- # cat 00:13:49.026 13:27:54 -- target/connect_stress.sh@34 -- # kill -0 
81827 00:13:49.026 13:27:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.026 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.026 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.592 13:27:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.592 13:27:54 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:49.592 13:27:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.592 13:27:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.592 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:13:49.850 13:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.850 13:27:55 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:49.850 13:27:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.850 13:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.850 13:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.108 13:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.108 13:27:55 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:50.108 13:27:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.108 13:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.108 13:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.366 13:27:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.366 13:27:55 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:50.366 13:27:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.366 13:27:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.366 13:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.625 13:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.625 13:27:56 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:50.625 13:27:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.625 13:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.625 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.192 13:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.192 13:27:56 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:51.192 13:27:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.192 13:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.192 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.451 13:27:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.451 13:27:56 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:51.451 13:27:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.451 13:27:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.451 13:27:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.710 13:27:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.710 13:27:57 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:51.710 13:27:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.710 13:27:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.710 13:27:57 -- common/autotest_common.sh@10 -- # set +x 00:13:51.969 13:27:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.969 13:27:57 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:51.969 13:27:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.969 13:27:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.969 13:27:57 -- common/autotest_common.sh@10 -- # set +x 00:13:52.228 13:27:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.228 13:27:57 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:52.228 13:27:57 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.228 13:27:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.228 13:27:57 -- common/autotest_common.sh@10 -- # set +x 00:13:52.795 13:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.795 13:27:58 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:52.795 13:27:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.795 13:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.795 13:27:58 -- common/autotest_common.sh@10 -- # set +x 00:13:53.055 13:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.055 13:27:58 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:53.055 13:27:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.055 13:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.055 13:27:58 -- common/autotest_common.sh@10 -- # set +x 00:13:53.314 13:27:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.314 13:27:58 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:53.314 13:27:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.314 13:27:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.314 13:27:58 -- common/autotest_common.sh@10 -- # set +x 00:13:53.573 13:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.573 13:27:59 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:53.573 13:27:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.573 13:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.573 13:27:59 -- common/autotest_common.sh@10 -- # set +x 00:13:53.831 13:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.831 13:27:59 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:53.831 13:27:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.831 13:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.831 13:27:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.399 13:27:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.399 13:27:59 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:54.399 13:27:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.399 13:27:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.399 13:27:59 -- common/autotest_common.sh@10 -- # set +x 00:13:54.657 13:28:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.657 13:28:00 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:54.657 13:28:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.657 13:28:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.657 13:28:00 -- common/autotest_common.sh@10 -- # set +x 00:13:54.916 13:28:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.916 13:28:00 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:54.916 13:28:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.916 13:28:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.916 13:28:00 -- common/autotest_common.sh@10 -- # set +x 00:13:55.174 13:28:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.174 13:28:00 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:55.175 13:28:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.175 13:28:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.175 13:28:00 -- common/autotest_common.sh@10 -- # set +x 00:13:55.433 13:28:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.433 13:28:01 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:55.433 13:28:01 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:55.433 13:28:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.433 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:13:55.999 13:28:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.999 13:28:01 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:55.999 13:28:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.999 13:28:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.999 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:13:56.259 13:28:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.259 13:28:01 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:56.259 13:28:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.259 13:28:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.259 13:28:01 -- common/autotest_common.sh@10 -- # set +x 00:13:56.517 13:28:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.517 13:28:02 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:56.517 13:28:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.517 13:28:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.517 13:28:02 -- common/autotest_common.sh@10 -- # set +x 00:13:56.781 13:28:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.781 13:28:02 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:56.781 13:28:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.781 13:28:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.781 13:28:02 -- common/autotest_common.sh@10 -- # set +x 00:13:57.064 13:28:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.064 13:28:02 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:57.064 13:28:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.064 13:28:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.064 13:28:02 -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 13:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.645 13:28:03 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:57.645 13:28:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.645 13:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.645 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:57.903 13:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.903 13:28:03 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:57.903 13:28:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.903 13:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.903 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:58.162 13:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.162 13:28:03 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:58.162 13:28:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.162 13:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.162 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:58.421 13:28:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.421 13:28:03 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:58.421 13:28:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.421 13:28:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.421 13:28:03 -- common/autotest_common.sh@10 -- # set +x 00:13:58.680 13:28:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.680 13:28:04 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:58.680 13:28:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.680 13:28:04 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.680 13:28:04 -- common/autotest_common.sh@10 -- # set +x 00:13:59.248 13:28:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.248 13:28:04 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:59.248 13:28:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.248 13:28:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.248 13:28:04 -- common/autotest_common.sh@10 -- # set +x 00:13:59.248 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.507 13:28:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.507 13:28:04 -- target/connect_stress.sh@34 -- # kill -0 81827 00:13:59.507 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81827) - No such process 00:13:59.507 13:28:04 -- target/connect_stress.sh@38 -- # wait 81827 00:13:59.507 13:28:04 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:59.507 13:28:04 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:59.507 13:28:04 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:59.507 13:28:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.507 13:28:04 -- nvmf/common.sh@116 -- # sync 00:13:59.507 13:28:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:59.507 13:28:05 -- nvmf/common.sh@119 -- # set +e 00:13:59.507 13:28:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:59.508 13:28:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:59.508 rmmod nvme_tcp 00:13:59.508 rmmod nvme_fabrics 00:13:59.508 rmmod nvme_keyring 00:13:59.508 13:28:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:59.508 13:28:05 -- nvmf/common.sh@123 -- # set -e 00:13:59.508 13:28:05 -- nvmf/common.sh@124 -- # return 0 00:13:59.508 13:28:05 -- nvmf/common.sh@477 -- # '[' -n 81775 ']' 00:13:59.508 13:28:05 -- nvmf/common.sh@478 -- # killprocess 81775 00:13:59.508 13:28:05 -- common/autotest_common.sh@936 -- # '[' -z 81775 ']' 00:13:59.508 13:28:05 -- common/autotest_common.sh@940 -- # kill -0 81775 00:13:59.508 13:28:05 -- common/autotest_common.sh@941 -- # uname 00:13:59.508 13:28:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:59.508 13:28:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81775 00:13:59.508 killing process with pid 81775 00:13:59.508 13:28:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:59.508 13:28:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:59.508 13:28:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81775' 00:13:59.508 13:28:05 -- common/autotest_common.sh@955 -- # kill 81775 00:13:59.508 13:28:05 -- common/autotest_common.sh@960 -- # wait 81775 00:13:59.766 13:28:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:59.766 13:28:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:59.766 13:28:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:59.766 13:28:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.766 13:28:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:59.766 13:28:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.766 13:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.766 13:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.766 13:28:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:59.766 00:13:59.766 real 0m12.288s 
00:13:59.766 user 0m41.342s 00:13:59.766 sys 0m3.142s 00:13:59.766 13:28:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:59.766 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:13:59.766 ************************************ 00:13:59.766 END TEST nvmf_connect_stress 00:13:59.766 ************************************ 00:13:59.766 13:28:05 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.766 13:28:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:59.766 13:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.766 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:13:59.766 ************************************ 00:13:59.766 START TEST nvmf_fused_ordering 00:13:59.766 ************************************ 00:13:59.766 13:28:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.766 * Looking for test storage... 00:13:59.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.766 13:28:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:59.766 13:28:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:59.766 13:28:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:00.025 13:28:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:00.025 13:28:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:00.025 13:28:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:00.025 13:28:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:00.025 13:28:05 -- scripts/common.sh@335 -- # IFS=.-: 00:14:00.025 13:28:05 -- scripts/common.sh@335 -- # read -ra ver1 00:14:00.025 13:28:05 -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.025 13:28:05 -- scripts/common.sh@336 -- # read -ra ver2 00:14:00.025 13:28:05 -- scripts/common.sh@337 -- # local 'op=<' 00:14:00.025 13:28:05 -- scripts/common.sh@339 -- # ver1_l=2 00:14:00.025 13:28:05 -- scripts/common.sh@340 -- # ver2_l=1 00:14:00.025 13:28:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:00.025 13:28:05 -- scripts/common.sh@343 -- # case "$op" in 00:14:00.025 13:28:05 -- scripts/common.sh@344 -- # : 1 00:14:00.025 13:28:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:00.025 13:28:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.025 13:28:05 -- scripts/common.sh@364 -- # decimal 1 00:14:00.025 13:28:05 -- scripts/common.sh@352 -- # local d=1 00:14:00.025 13:28:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.025 13:28:05 -- scripts/common.sh@354 -- # echo 1 00:14:00.025 13:28:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:00.025 13:28:05 -- scripts/common.sh@365 -- # decimal 2 00:14:00.025 13:28:05 -- scripts/common.sh@352 -- # local d=2 00:14:00.025 13:28:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.025 13:28:05 -- scripts/common.sh@354 -- # echo 2 00:14:00.025 13:28:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:00.025 13:28:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:00.025 13:28:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:00.025 13:28:05 -- scripts/common.sh@367 -- # return 0 00:14:00.025 13:28:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.025 13:28:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.025 --rc genhtml_branch_coverage=1 00:14:00.025 --rc genhtml_function_coverage=1 00:14:00.025 --rc genhtml_legend=1 00:14:00.025 --rc geninfo_all_blocks=1 00:14:00.025 --rc geninfo_unexecuted_blocks=1 00:14:00.025 00:14:00.025 ' 00:14:00.025 13:28:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.025 --rc genhtml_branch_coverage=1 00:14:00.025 --rc genhtml_function_coverage=1 00:14:00.025 --rc genhtml_legend=1 00:14:00.025 --rc geninfo_all_blocks=1 00:14:00.025 --rc geninfo_unexecuted_blocks=1 00:14:00.025 00:14:00.025 ' 00:14:00.025 13:28:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.025 --rc genhtml_branch_coverage=1 00:14:00.025 --rc genhtml_function_coverage=1 00:14:00.025 --rc genhtml_legend=1 00:14:00.025 --rc geninfo_all_blocks=1 00:14:00.025 --rc geninfo_unexecuted_blocks=1 00:14:00.025 00:14:00.025 ' 00:14:00.025 13:28:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:00.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.025 --rc genhtml_branch_coverage=1 00:14:00.025 --rc genhtml_function_coverage=1 00:14:00.025 --rc genhtml_legend=1 00:14:00.025 --rc geninfo_all_blocks=1 00:14:00.025 --rc geninfo_unexecuted_blocks=1 00:14:00.025 00:14:00.025 ' 00:14:00.025 13:28:05 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.025 13:28:05 -- nvmf/common.sh@7 -- # uname -s 00:14:00.025 13:28:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.025 13:28:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.025 13:28:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.025 13:28:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.025 13:28:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.025 13:28:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.025 13:28:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.025 13:28:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.025 13:28:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.025 13:28:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.025 13:28:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
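The "lt 1.15 2" check traced a few lines up (scripts/common.sh's cmp_versions) decides whether the installed lcov predates 2.x and therefore still wants the old-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options exported right after it. The idea is a plain component-wise version compare; a minimal standalone sketch with a hypothetical function name, not the harness's exact code (numeric components only):

  # Split each version string on '.', '-' or ':' and compare component by component;
  # return 0 (true) when the first version is strictly lower than the second.
  version_lt() {
      local -a a b
      local i n
      IFS='.-:' read -r -a a <<< "$1"
      IFS='.-:' read -r -a b <<< "$2"
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not strictly lower
  }
  version_lt 1.15 2 && echo "pre-2.0 lcov: keep the --rc coverage flags"   # matches the 'return 0' in the trace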
00:14:00.025 13:28:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:00.025 13:28:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.025 13:28:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.025 13:28:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.025 13:28:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.025 13:28:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.025 13:28:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.025 13:28:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.025 13:28:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.025 13:28:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.025 13:28:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.025 13:28:05 -- paths/export.sh@5 -- # export PATH 00:14:00.025 13:28:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.025 13:28:05 -- nvmf/common.sh@46 -- # : 0 00:14:00.025 13:28:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:00.025 13:28:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:00.025 13:28:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:00.025 13:28:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.025 13:28:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.025 13:28:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:00.025 13:28:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:00.025 13:28:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:00.025 13:28:05 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:00.025 13:28:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:00.025 13:28:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.025 13:28:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:00.025 13:28:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:00.025 13:28:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:00.025 13:28:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.025 13:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.025 13:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.025 13:28:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:00.025 13:28:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:00.025 13:28:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:00.025 13:28:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:00.025 13:28:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:00.026 13:28:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:00.026 13:28:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.026 13:28:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.026 13:28:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.026 13:28:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:00.026 13:28:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.026 13:28:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.026 13:28:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.026 13:28:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.026 13:28:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.026 13:28:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.026 13:28:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.026 13:28:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.026 13:28:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:00.026 13:28:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:00.026 Cannot find device "nvmf_tgt_br" 00:14:00.026 13:28:05 -- nvmf/common.sh@154 -- # true 00:14:00.026 13:28:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.026 Cannot find device "nvmf_tgt_br2" 00:14:00.026 13:28:05 -- nvmf/common.sh@155 -- # true 00:14:00.026 13:28:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.026 13:28:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.026 Cannot find device "nvmf_tgt_br" 00:14:00.026 13:28:05 -- nvmf/common.sh@157 -- # true 00:14:00.026 13:28:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.026 Cannot find device "nvmf_tgt_br2" 00:14:00.026 13:28:05 -- nvmf/common.sh@158 -- # true 00:14:00.026 13:28:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.026 13:28:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.026 13:28:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.026 13:28:05 -- nvmf/common.sh@161 -- # true 00:14:00.026 13:28:05 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.026 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.026 13:28:05 -- nvmf/common.sh@162 -- # true 00:14:00.026 13:28:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.026 13:28:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.026 13:28:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.026 13:28:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.285 13:28:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.285 13:28:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.285 13:28:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.285 13:28:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.285 13:28:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.285 13:28:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.285 13:28:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.285 13:28:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.285 13:28:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.285 13:28:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.285 13:28:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.285 13:28:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.285 13:28:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.285 13:28:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.285 13:28:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.285 13:28:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.285 13:28:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.285 13:28:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.285 13:28:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.285 13:28:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:00.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:00.285 00:14:00.285 --- 10.0.0.2 ping statistics --- 00:14:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.285 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:00.285 13:28:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:00.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:00.285 00:14:00.285 --- 10.0.0.3 ping statistics --- 00:14:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.285 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:00.285 13:28:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:00.285 00:14:00.285 --- 10.0.0.1 ping statistics --- 00:14:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.285 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:00.285 13:28:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.285 13:28:05 -- nvmf/common.sh@421 -- # return 0 00:14:00.285 13:28:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.285 13:28:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.285 13:28:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.285 13:28:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.285 13:28:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.285 13:28:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.285 13:28:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.285 13:28:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:00.285 13:28:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.285 13:28:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.285 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:00.285 13:28:05 -- nvmf/common.sh@469 -- # nvmfpid=82167 00:14:00.285 13:28:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.285 13:28:05 -- nvmf/common.sh@470 -- # waitforlisten 82167 00:14:00.285 13:28:05 -- common/autotest_common.sh@829 -- # '[' -z 82167 ']' 00:14:00.285 13:28:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.285 13:28:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.285 13:28:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.285 13:28:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.285 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:00.285 [2024-12-15 13:28:05.931444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.285 [2024-12-15 13:28:05.931534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.544 [2024-12-15 13:28:06.073929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.544 [2024-12-15 13:28:06.127004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.544 [2024-12-15 13:28:06.127145] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.544 [2024-12-15 13:28:06.127159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.544 [2024-12-15 13:28:06.127167] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:00.544 [2024-12-15 13:28:06.127197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.480 13:28:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.480 13:28:06 -- common/autotest_common.sh@862 -- # return 0 00:14:01.480 13:28:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.480 13:28:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.480 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 13:28:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.480 13:28:07 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.480 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.480 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 [2024-12-15 13:28:07.017586] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.480 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.480 13:28:07 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.480 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.480 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.480 13:28:07 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.480 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.480 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 [2024-12-15 13:28:07.033716] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.480 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.480 13:28:07 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.480 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.480 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 NULL1 00:14:01.480 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.480 13:28:07 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:01.480 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.480 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.480 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.481 13:28:07 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:01.481 13:28:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.481 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:14:01.481 13:28:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.481 13:28:07 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:01.481 [2024-12-15 13:28:07.086165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
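Stripped of the shell tracing and the rpc_cmd wrapper, the target-side provisioning that this fused_ordering run (and, minus the last two calls, the connect_stress run above) performs against the namespaced nvmf_tgt is a short RPC sequence. The method names and arguments below are taken verbatim from the traces; invoking scripts/rpc.py directly is an equivalent sketch of what rpc_cmd issues over /var/tmp/spdk.sock, not the harness's literal code:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # flags as traced above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                # allow any host, serial number, max 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_wait_for_examine                                        # fused_ordering test only
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1       # fused_ordering test only
  # the example binary is then pointed at that listener (connection string verbatim from the trace)
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The repeated connect_stress.sh@34/@35 lines earlier are that test's monitor loop: as long as kill -0 finds the stress binary alive, it keeps driving RPCs at the target.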
00:14:01.481 [2024-12-15 13:28:07.086216] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82217 ] 00:14:02.049 Attached to nqn.2016-06.io.spdk:cnode1 00:14:02.049 Namespace ID: 1 size: 1GB 00:14:02.049 fused_ordering(0) 00:14:02.049 fused_ordering(1) 00:14:02.049 fused_ordering(2) 00:14:02.049 fused_ordering(3) 00:14:02.049 fused_ordering(4) 00:14:02.049 fused_ordering(5) 00:14:02.049 fused_ordering(6) 00:14:02.049 fused_ordering(7) 00:14:02.049 fused_ordering(8) 00:14:02.049 fused_ordering(9) 00:14:02.049 fused_ordering(10) 00:14:02.049 fused_ordering(11) 00:14:02.049 fused_ordering(12) 00:14:02.049 fused_ordering(13) 00:14:02.049 fused_ordering(14) 00:14:02.049 fused_ordering(15) 00:14:02.049 fused_ordering(16) 00:14:02.049 fused_ordering(17) 00:14:02.049 fused_ordering(18) 00:14:02.049 fused_ordering(19) 00:14:02.049 fused_ordering(20) 00:14:02.049 fused_ordering(21) 00:14:02.049 fused_ordering(22) 00:14:02.049 fused_ordering(23) 00:14:02.049 fused_ordering(24) 00:14:02.049 fused_ordering(25) 00:14:02.049 fused_ordering(26) 00:14:02.049 fused_ordering(27) 00:14:02.049 fused_ordering(28) 00:14:02.049 fused_ordering(29) 00:14:02.049 fused_ordering(30) 00:14:02.049 fused_ordering(31) 00:14:02.049 fused_ordering(32) 00:14:02.049 fused_ordering(33) 00:14:02.049 fused_ordering(34) 00:14:02.049 fused_ordering(35) 00:14:02.049 fused_ordering(36) 00:14:02.049 fused_ordering(37) 00:14:02.049 fused_ordering(38) 00:14:02.049 fused_ordering(39) 00:14:02.049 fused_ordering(40) 00:14:02.049 fused_ordering(41) 00:14:02.049 fused_ordering(42) 00:14:02.049 fused_ordering(43) 00:14:02.049 fused_ordering(44) 00:14:02.049 fused_ordering(45) 00:14:02.049 fused_ordering(46) 00:14:02.049 fused_ordering(47) 00:14:02.049 fused_ordering(48) 00:14:02.049 fused_ordering(49) 00:14:02.049 fused_ordering(50) 00:14:02.049 fused_ordering(51) 00:14:02.049 fused_ordering(52) 00:14:02.049 fused_ordering(53) 00:14:02.049 fused_ordering(54) 00:14:02.049 fused_ordering(55) 00:14:02.049 fused_ordering(56) 00:14:02.049 fused_ordering(57) 00:14:02.049 fused_ordering(58) 00:14:02.049 fused_ordering(59) 00:14:02.049 fused_ordering(60) 00:14:02.049 fused_ordering(61) 00:14:02.049 fused_ordering(62) 00:14:02.049 fused_ordering(63) 00:14:02.049 fused_ordering(64) 00:14:02.049 fused_ordering(65) 00:14:02.049 fused_ordering(66) 00:14:02.049 fused_ordering(67) 00:14:02.049 fused_ordering(68) 00:14:02.049 fused_ordering(69) 00:14:02.049 fused_ordering(70) 00:14:02.049 fused_ordering(71) 00:14:02.049 fused_ordering(72) 00:14:02.049 fused_ordering(73) 00:14:02.049 fused_ordering(74) 00:14:02.049 fused_ordering(75) 00:14:02.049 fused_ordering(76) 00:14:02.049 fused_ordering(77) 00:14:02.049 fused_ordering(78) 00:14:02.049 fused_ordering(79) 00:14:02.049 fused_ordering(80) 00:14:02.049 fused_ordering(81) 00:14:02.049 fused_ordering(82) 00:14:02.049 fused_ordering(83) 00:14:02.049 fused_ordering(84) 00:14:02.049 fused_ordering(85) 00:14:02.049 fused_ordering(86) 00:14:02.049 fused_ordering(87) 00:14:02.049 fused_ordering(88) 00:14:02.049 fused_ordering(89) 00:14:02.049 fused_ordering(90) 00:14:02.049 fused_ordering(91) 00:14:02.049 fused_ordering(92) 00:14:02.049 fused_ordering(93) 00:14:02.049 fused_ordering(94) 00:14:02.049 fused_ordering(95) 00:14:02.049 fused_ordering(96) 00:14:02.049 fused_ordering(97) 00:14:02.049 fused_ordering(98) 
00:14:02.049 fused_ordering(99) 00:14:02.049 fused_ordering(100) 00:14:02.049 fused_ordering(101) 00:14:02.049 fused_ordering(102) 00:14:02.049 fused_ordering(103) 00:14:02.049 fused_ordering(104) 00:14:02.049 fused_ordering(105) 00:14:02.049 fused_ordering(106) 00:14:02.049 fused_ordering(107) 00:14:02.049 fused_ordering(108) 00:14:02.049 fused_ordering(109) 00:14:02.049 fused_ordering(110) 00:14:02.049 fused_ordering(111) 00:14:02.049 fused_ordering(112) 00:14:02.049 fused_ordering(113) 00:14:02.049 fused_ordering(114) 00:14:02.049 fused_ordering(115) 00:14:02.049 fused_ordering(116) 00:14:02.049 fused_ordering(117) 00:14:02.049 fused_ordering(118) 00:14:02.049 fused_ordering(119) 00:14:02.049 fused_ordering(120) 00:14:02.049 fused_ordering(121) 00:14:02.049 fused_ordering(122) 00:14:02.049 fused_ordering(123) 00:14:02.049 fused_ordering(124) 00:14:02.049 fused_ordering(125) 00:14:02.049 fused_ordering(126) 00:14:02.049 fused_ordering(127) 00:14:02.049 fused_ordering(128) 00:14:02.049 fused_ordering(129) 00:14:02.049 fused_ordering(130) 00:14:02.049 fused_ordering(131) 00:14:02.049 fused_ordering(132) 00:14:02.049 fused_ordering(133) 00:14:02.049 fused_ordering(134) 00:14:02.049 fused_ordering(135) 00:14:02.049 fused_ordering(136) 00:14:02.049 fused_ordering(137) 00:14:02.049 fused_ordering(138) 00:14:02.049 fused_ordering(139) 00:14:02.049 fused_ordering(140) 00:14:02.049 fused_ordering(141) 00:14:02.049 fused_ordering(142) 00:14:02.049 fused_ordering(143) 00:14:02.049 fused_ordering(144) 00:14:02.049 fused_ordering(145) 00:14:02.049 fused_ordering(146) 00:14:02.049 fused_ordering(147) 00:14:02.049 fused_ordering(148) 00:14:02.049 fused_ordering(149) 00:14:02.049 fused_ordering(150) 00:14:02.049 fused_ordering(151) 00:14:02.049 fused_ordering(152) 00:14:02.049 fused_ordering(153) 00:14:02.049 fused_ordering(154) 00:14:02.049 fused_ordering(155) 00:14:02.049 fused_ordering(156) 00:14:02.049 fused_ordering(157) 00:14:02.049 fused_ordering(158) 00:14:02.049 fused_ordering(159) 00:14:02.049 fused_ordering(160) 00:14:02.049 fused_ordering(161) 00:14:02.049 fused_ordering(162) 00:14:02.049 fused_ordering(163) 00:14:02.049 fused_ordering(164) 00:14:02.049 fused_ordering(165) 00:14:02.049 fused_ordering(166) 00:14:02.049 fused_ordering(167) 00:14:02.049 fused_ordering(168) 00:14:02.049 fused_ordering(169) 00:14:02.049 fused_ordering(170) 00:14:02.049 fused_ordering(171) 00:14:02.049 fused_ordering(172) 00:14:02.049 fused_ordering(173) 00:14:02.049 fused_ordering(174) 00:14:02.049 fused_ordering(175) 00:14:02.049 fused_ordering(176) 00:14:02.049 fused_ordering(177) 00:14:02.049 fused_ordering(178) 00:14:02.049 fused_ordering(179) 00:14:02.049 fused_ordering(180) 00:14:02.049 fused_ordering(181) 00:14:02.049 fused_ordering(182) 00:14:02.049 fused_ordering(183) 00:14:02.049 fused_ordering(184) 00:14:02.049 fused_ordering(185) 00:14:02.049 fused_ordering(186) 00:14:02.049 fused_ordering(187) 00:14:02.049 fused_ordering(188) 00:14:02.049 fused_ordering(189) 00:14:02.049 fused_ordering(190) 00:14:02.049 fused_ordering(191) 00:14:02.049 fused_ordering(192) 00:14:02.049 fused_ordering(193) 00:14:02.049 fused_ordering(194) 00:14:02.049 fused_ordering(195) 00:14:02.049 fused_ordering(196) 00:14:02.049 fused_ordering(197) 00:14:02.049 fused_ordering(198) 00:14:02.049 fused_ordering(199) 00:14:02.049 fused_ordering(200) 00:14:02.049 fused_ordering(201) 00:14:02.049 fused_ordering(202) 00:14:02.050 fused_ordering(203) 00:14:02.050 fused_ordering(204) 00:14:02.050 fused_ordering(205) 00:14:02.050 
fused_ordering(206) 00:14:02.050 fused_ordering(207) 00:14:02.050 [repetitive fused_ordering trace elided: entries 208 through 957 were logged in the identical form, with the elapsed-time prefix advancing from 00:14:02.050 through 00:14:02.618 and 00:14:02.878 to 00:14:03.138] 00:14:03.138 fused_ordering(958)
00:14:03.138 fused_ordering(959) 00:14:03.138 fused_ordering(960) 00:14:03.138 fused_ordering(961) 00:14:03.138 fused_ordering(962) 00:14:03.138 fused_ordering(963) 00:14:03.138 fused_ordering(964) 00:14:03.138 fused_ordering(965) 00:14:03.138 fused_ordering(966) 00:14:03.138 fused_ordering(967) 00:14:03.138 fused_ordering(968) 00:14:03.138 fused_ordering(969) 00:14:03.138 fused_ordering(970) 00:14:03.138 fused_ordering(971) 00:14:03.138 fused_ordering(972) 00:14:03.138 fused_ordering(973) 00:14:03.138 fused_ordering(974) 00:14:03.138 fused_ordering(975) 00:14:03.138 fused_ordering(976) 00:14:03.138 fused_ordering(977) 00:14:03.138 fused_ordering(978) 00:14:03.138 fused_ordering(979) 00:14:03.138 fused_ordering(980) 00:14:03.138 fused_ordering(981) 00:14:03.138 fused_ordering(982) 00:14:03.138 fused_ordering(983) 00:14:03.138 fused_ordering(984) 00:14:03.138 fused_ordering(985) 00:14:03.138 fused_ordering(986) 00:14:03.138 fused_ordering(987) 00:14:03.138 fused_ordering(988) 00:14:03.138 fused_ordering(989) 00:14:03.138 fused_ordering(990) 00:14:03.138 fused_ordering(991) 00:14:03.138 fused_ordering(992) 00:14:03.138 fused_ordering(993) 00:14:03.138 fused_ordering(994) 00:14:03.138 fused_ordering(995) 00:14:03.138 fused_ordering(996) 00:14:03.138 fused_ordering(997) 00:14:03.138 fused_ordering(998) 00:14:03.138 fused_ordering(999) 00:14:03.138 fused_ordering(1000) 00:14:03.138 fused_ordering(1001) 00:14:03.138 fused_ordering(1002) 00:14:03.138 fused_ordering(1003) 00:14:03.138 fused_ordering(1004) 00:14:03.138 fused_ordering(1005) 00:14:03.138 fused_ordering(1006) 00:14:03.138 fused_ordering(1007) 00:14:03.138 fused_ordering(1008) 00:14:03.138 fused_ordering(1009) 00:14:03.138 fused_ordering(1010) 00:14:03.138 fused_ordering(1011) 00:14:03.138 fused_ordering(1012) 00:14:03.138 fused_ordering(1013) 00:14:03.138 fused_ordering(1014) 00:14:03.138 fused_ordering(1015) 00:14:03.138 fused_ordering(1016) 00:14:03.138 fused_ordering(1017) 00:14:03.138 fused_ordering(1018) 00:14:03.138 fused_ordering(1019) 00:14:03.138 fused_ordering(1020) 00:14:03.138 fused_ordering(1021) 00:14:03.139 fused_ordering(1022) 00:14:03.139 fused_ordering(1023) 00:14:03.139 13:28:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:03.139 13:28:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:03.139 13:28:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:03.139 13:28:08 -- nvmf/common.sh@116 -- # sync 00:14:03.397 13:28:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:03.397 13:28:08 -- nvmf/common.sh@119 -- # set +e 00:14:03.397 13:28:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:03.397 13:28:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:03.397 rmmod nvme_tcp 00:14:03.397 rmmod nvme_fabrics 00:14:03.397 rmmod nvme_keyring 00:14:03.397 13:28:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:03.397 13:28:08 -- nvmf/common.sh@123 -- # set -e 00:14:03.397 13:28:08 -- nvmf/common.sh@124 -- # return 0 00:14:03.397 13:28:08 -- nvmf/common.sh@477 -- # '[' -n 82167 ']' 00:14:03.397 13:28:08 -- nvmf/common.sh@478 -- # killprocess 82167 00:14:03.398 13:28:08 -- common/autotest_common.sh@936 -- # '[' -z 82167 ']' 00:14:03.398 13:28:08 -- common/autotest_common.sh@940 -- # kill -0 82167 00:14:03.398 13:28:08 -- common/autotest_common.sh@941 -- # uname 00:14:03.398 13:28:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.398 13:28:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82167 00:14:03.398 13:28:08 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:03.398 killing process with pid 82167 00:14:03.398 13:28:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:03.398 13:28:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82167' 00:14:03.398 13:28:08 -- common/autotest_common.sh@955 -- # kill 82167 00:14:03.398 13:28:08 -- common/autotest_common.sh@960 -- # wait 82167 00:14:03.656 13:28:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:03.657 13:28:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:03.657 13:28:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:03.657 13:28:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.657 13:28:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:03.657 13:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.657 13:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.657 13:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.657 13:28:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:03.657 00:14:03.657 real 0m3.802s 00:14:03.657 user 0m4.499s 00:14:03.657 sys 0m1.220s 00:14:03.657 13:28:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:03.657 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:03.657 ************************************ 00:14:03.657 END TEST nvmf_fused_ordering 00:14:03.657 ************************************ 00:14:03.657 13:28:09 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:03.657 13:28:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:03.657 13:28:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.657 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:03.657 ************************************ 00:14:03.657 START TEST nvmf_delete_subsystem 00:14:03.657 ************************************ 00:14:03.657 13:28:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:03.657 * Looking for test storage... 
00:14:03.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:03.657 13:28:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:03.657 13:28:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:03.657 13:28:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:03.916 13:28:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:03.916 13:28:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:03.916 13:28:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:03.916 13:28:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:03.916 13:28:09 -- scripts/common.sh@335 -- # IFS=.-: 00:14:03.916 13:28:09 -- scripts/common.sh@335 -- # read -ra ver1 00:14:03.916 13:28:09 -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.916 13:28:09 -- scripts/common.sh@336 -- # read -ra ver2 00:14:03.916 13:28:09 -- scripts/common.sh@337 -- # local 'op=<' 00:14:03.916 13:28:09 -- scripts/common.sh@339 -- # ver1_l=2 00:14:03.916 13:28:09 -- scripts/common.sh@340 -- # ver2_l=1 00:14:03.916 13:28:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:03.916 13:28:09 -- scripts/common.sh@343 -- # case "$op" in 00:14:03.916 13:28:09 -- scripts/common.sh@344 -- # : 1 00:14:03.916 13:28:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:03.916 13:28:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:03.916 13:28:09 -- scripts/common.sh@364 -- # decimal 1 00:14:03.916 13:28:09 -- scripts/common.sh@352 -- # local d=1 00:14:03.916 13:28:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.916 13:28:09 -- scripts/common.sh@354 -- # echo 1 00:14:03.916 13:28:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:03.916 13:28:09 -- scripts/common.sh@365 -- # decimal 2 00:14:03.916 13:28:09 -- scripts/common.sh@352 -- # local d=2 00:14:03.916 13:28:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.916 13:28:09 -- scripts/common.sh@354 -- # echo 2 00:14:03.916 13:28:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:03.916 13:28:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:03.916 13:28:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:03.916 13:28:09 -- scripts/common.sh@367 -- # return 0 00:14:03.916 13:28:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.916 13:28:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:03.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.916 --rc genhtml_branch_coverage=1 00:14:03.916 --rc genhtml_function_coverage=1 00:14:03.916 --rc genhtml_legend=1 00:14:03.916 --rc geninfo_all_blocks=1 00:14:03.916 --rc geninfo_unexecuted_blocks=1 00:14:03.916 00:14:03.916 ' 00:14:03.916 13:28:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:03.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.916 --rc genhtml_branch_coverage=1 00:14:03.916 --rc genhtml_function_coverage=1 00:14:03.916 --rc genhtml_legend=1 00:14:03.916 --rc geninfo_all_blocks=1 00:14:03.916 --rc geninfo_unexecuted_blocks=1 00:14:03.916 00:14:03.916 ' 00:14:03.916 13:28:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:03.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.916 --rc genhtml_branch_coverage=1 00:14:03.916 --rc genhtml_function_coverage=1 00:14:03.916 --rc genhtml_legend=1 00:14:03.916 --rc geninfo_all_blocks=1 00:14:03.916 --rc geninfo_unexecuted_blocks=1 00:14:03.916 00:14:03.916 ' 00:14:03.916 
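The shell trace above is scripts/common.sh deciding whether the installed lcov is older than 2 ("lt 1.15 2") so the matching legacy --rc lcov_* options can be exported. As a minimal sketch of that field-by-field comparison, using a hypothetical standalone helper ver_lt rather than the real cmp_versions/lt functions from scripts/common.sh (numeric dot/dash/colon-separated fields only, missing fields treated as 0):

# Hypothetical, simplified re-creation of the version check traced above.
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( 10#$x < 10#$y )) && return 0   # first differing field decides
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                              # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc lcov_* options"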
13:28:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:03.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.916 --rc genhtml_branch_coverage=1 00:14:03.916 --rc genhtml_function_coverage=1 00:14:03.916 --rc genhtml_legend=1 00:14:03.916 --rc geninfo_all_blocks=1 00:14:03.916 --rc geninfo_unexecuted_blocks=1 00:14:03.916 00:14:03.916 ' 00:14:03.917 13:28:09 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.917 13:28:09 -- nvmf/common.sh@7 -- # uname -s 00:14:03.917 13:28:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.917 13:28:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.917 13:28:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.917 13:28:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.917 13:28:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.917 13:28:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.917 13:28:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.917 13:28:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.917 13:28:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.917 13:28:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:03.917 13:28:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:03.917 13:28:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.917 13:28:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.917 13:28:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.917 13:28:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.917 13:28:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.917 13:28:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.917 13:28:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.917 13:28:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.917 13:28:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.917 13:28:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.917 13:28:09 -- paths/export.sh@5 -- # export PATH 00:14:03.917 13:28:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.917 13:28:09 -- nvmf/common.sh@46 -- # : 0 00:14:03.917 13:28:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:03.917 13:28:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:03.917 13:28:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:03.917 13:28:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.917 13:28:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.917 13:28:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:03.917 13:28:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:03.917 13:28:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:03.917 13:28:09 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:03.917 13:28:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:03.917 13:28:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.917 13:28:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:03.917 13:28:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:03.917 13:28:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:03.917 13:28:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.917 13:28:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.917 13:28:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.917 13:28:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:03.917 13:28:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:03.917 13:28:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.917 13:28:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.917 13:28:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:03.917 13:28:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:03.917 13:28:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.917 13:28:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.917 13:28:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.917 13:28:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:03.917 13:28:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.917 13:28:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.917 13:28:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.917 13:28:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.917 13:28:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:03.917 13:28:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:03.917 Cannot find device "nvmf_tgt_br" 00:14:03.917 13:28:09 -- nvmf/common.sh@154 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.917 Cannot find device "nvmf_tgt_br2" 00:14:03.917 13:28:09 -- nvmf/common.sh@155 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:03.917 13:28:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:03.917 Cannot find device "nvmf_tgt_br" 00:14:03.917 13:28:09 -- nvmf/common.sh@157 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:03.917 Cannot find device "nvmf_tgt_br2" 00:14:03.917 13:28:09 -- nvmf/common.sh@158 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:03.917 13:28:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:03.917 13:28:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.917 13:28:09 -- nvmf/common.sh@161 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:03.917 13:28:09 -- nvmf/common.sh@162 -- # true 00:14:03.917 13:28:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:03.917 13:28:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:03.917 13:28:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:03.917 13:28:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:03.917 13:28:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:03.917 13:28:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.175 13:28:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.175 13:28:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.175 13:28:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.175 13:28:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:04.175 13:28:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:04.175 13:28:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:04.175 13:28:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:04.175 13:28:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.175 13:28:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.175 13:28:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.175 13:28:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:04.175 13:28:09 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:04.175 13:28:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.175 13:28:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.175 13:28:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.175 13:28:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.175 13:28:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.175 13:28:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:04.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:04.175 00:14:04.175 --- 10.0.0.2 ping statistics --- 00:14:04.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.175 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:04.175 13:28:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:04.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:04.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:14:04.175 00:14:04.175 --- 10.0.0.3 ping statistics --- 00:14:04.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.176 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:04.176 13:28:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:04.176 00:14:04.176 --- 10.0.0.1 ping statistics --- 00:14:04.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.176 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:04.176 13:28:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.176 13:28:09 -- nvmf/common.sh@421 -- # return 0 00:14:04.176 13:28:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:04.176 13:28:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.176 13:28:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:04.176 13:28:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:04.176 13:28:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.176 13:28:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:04.176 13:28:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:04.176 13:28:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:04.176 13:28:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:04.176 13:28:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.176 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 13:28:09 -- nvmf/common.sh@469 -- # nvmfpid=82427 00:14:04.176 13:28:09 -- nvmf/common.sh@470 -- # waitforlisten 82427 00:14:04.176 13:28:09 -- common/autotest_common.sh@829 -- # '[' -z 82427 ']' 00:14:04.176 13:28:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:04.176 13:28:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.176 13:28:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.176 13:28:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
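For reference, the veth/bridge topology the test just assembled (and verified with the ping statistics above) can be reproduced by hand. This is a condensed sketch of the ip/iptables commands echoed in this log, run as root; the second target interface (nvmf_tgt_if2, 10.0.0.3) and the error handling/teardown path are omitted:

# Target side lives in its own network namespace, joined to the initiator
# through a veth pair per side and a common bridge (commands as echoed above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability, as in the log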
00:14:04.176 13:28:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.176 13:28:09 -- common/autotest_common.sh@10 -- # set +x 00:14:04.176 [2024-12-15 13:28:09.816090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:04.176 [2024-12-15 13:28:09.816177] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.434 [2024-12-15 13:28:09.953539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:04.434 [2024-12-15 13:28:10.024394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:04.434 [2024-12-15 13:28:10.024529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.434 [2024-12-15 13:28:10.024542] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.434 [2024-12-15 13:28:10.024551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.434 [2024-12-15 13:28:10.024958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.434 [2024-12-15 13:28:10.024996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.370 13:28:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.370 13:28:10 -- common/autotest_common.sh@862 -- # return 0 00:14:05.370 13:28:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:05.370 13:28:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.370 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.370 13:28:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.370 13:28:10 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.370 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.370 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.370 [2024-12-15 13:28:10.870480] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.370 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.370 13:28:10 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.370 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.370 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.370 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.370 13:28:10 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.370 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.370 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.370 [2024-12-15 13:28:10.886649] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.370 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.370 13:28:10 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.370 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.370 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.370 NULL1 00:14:05.370 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.371 13:28:10 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:05.371 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.371 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.371 Delay0 00:14:05.371 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.371 13:28:10 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.371 13:28:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.371 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:14:05.371 13:28:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.371 13:28:10 -- target/delete_subsystem.sh@28 -- # perf_pid=82478 00:14:05.371 13:28:10 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:05.371 13:28:10 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:05.629 [2024-12-15 13:28:11.071127] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:07.533 13:28:12 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.533 13:28:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.533 13:28:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 starting I/O failed: -6 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 
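The "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries surrounding this point are the expected outcome of the sequence traced above: a null bdev is wrapped in a delay bdev, exported over NVMe/TCP, spdk_nvme_perf queues I/O against it, and the subsystem is deleted while that I/O is still outstanding. A standalone sketch of the same sequence, assuming an SPDK checkout as the working directory and using scripts/rpc.py directly instead of the test's rpc_cmd wrapper (arguments copied from the trace above):

# Assumes nvmf_tgt is already running (here, inside the nvmf_tgt_ns_spdk
# namespace) and listening on the default RPC socket /var/tmp/spdk.sock.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Queue deep I/O against the deliberately slow namespace in the background ...
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2

# ... then delete the subsystem while those commands are still in flight; the
# outstanding I/O is completed with an error status, as in the trace that follows.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait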
Read completed with error (sct=0, sc=8) 00:14:07.533 Write completed with error (sct=0, sc=8) 00:14:07.533 [repetitive aborted-I/O completion trace elided: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries recur from 00:14:07.533 through 00:14:08.472; only the distinct transport-state messages are kept] 00:14:07.534 [2024-12-15 13:28:13.105362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x867870 is same with the state(5) to be set 00:14:07.535 [2024-12-15 13:28:13.110893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe634000c00 is same with the state(5) to be set 00:14:08.472 [2024-12-15 13:28:14.084716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x866070 is same with the state(5) to be set 00:14:08.472 [2024-12-15 13:28:14.105176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868120 is same with the state(5) to be set 00:14:08.472 [2024-12-15 13:28:14.105439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x867bc0 is same with the state(5) to be set 00:14:08.472 Read completed with
error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 [2024-12-15 13:28:14.107587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe63400bf20 is same with the state(5) to be set 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, 
sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Read completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 Write completed with error (sct=0, sc=8) 00:14:08.472 [2024-12-15 13:28:14.108464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe63400c600 is same with the state(5) to be set 00:14:08.472 [2024-12-15 13:28:14.109401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x866070 (9): Bad file descriptor 00:14:08.472 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:08.472 13:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.472 13:28:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:08.472 13:28:14 -- target/delete_subsystem.sh@35 -- # kill -0 82478 00:14:08.472 13:28:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:08.472 Initializing NVMe Controllers 00:14:08.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.472 Controller IO queue size 128, less than required. 00:14:08.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:08.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:08.472 Initialization complete. Launching workers. 
00:14:08.472 ======================================================== 00:14:08.472 Latency(us) 00:14:08.473 Device Information : IOPS MiB/s Average min max 00:14:08.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.57 0.08 901528.33 429.86 1044050.34 00:14:08.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.99 0.09 920927.96 1012.57 2001854.78 00:14:08.473 ======================================================== 00:14:08.473 Total : 351.55 0.17 911626.01 429.86 2001854.78 00:14:08.473 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@35 -- # kill -0 82478 00:14:09.041 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82478) - No such process 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@45 -- # NOT wait 82478 00:14:09.041 13:28:14 -- common/autotest_common.sh@650 -- # local es=0 00:14:09.041 13:28:14 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82478 00:14:09.041 13:28:14 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:09.041 13:28:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.041 13:28:14 -- common/autotest_common.sh@642 -- # type -t wait 00:14:09.041 13:28:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.041 13:28:14 -- common/autotest_common.sh@653 -- # wait 82478 00:14:09.041 13:28:14 -- common/autotest_common.sh@653 -- # es=1 00:14:09.041 13:28:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:09.041 13:28:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:09.041 13:28:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.041 13:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.041 13:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.041 13:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.041 13:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.041 13:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.041 [2024-12-15 13:28:14.634565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.041 13:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.041 13:28:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.041 13:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.041 13:28:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@54 -- # perf_pid=82523 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:09.041 13:28:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:09.299 [2024-12-15 13:28:14.803928] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:09.560 13:28:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:09.560 13:28:15 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:09.560 13:28:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.127 13:28:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.127 13:28:15 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:10.127 13:28:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.694 13:28:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.694 13:28:16 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:10.694 13:28:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:11.262 13:28:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:11.262 13:28:16 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:11.262 13:28:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:11.520 13:28:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:11.520 13:28:17 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:11.520 13:28:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:12.088 13:28:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:12.088 13:28:17 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:12.088 13:28:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:12.347 Initializing NVMe Controllers 00:14:12.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.347 Controller IO queue size 128, less than required. 00:14:12.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:12.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:12.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:12.347 Initialization complete. Launching workers. 
00:14:12.347 ======================================================== 00:14:12.347 Latency(us) 00:14:12.347 Device Information : IOPS MiB/s Average min max 00:14:12.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002439.01 1000134.69 1009511.21 00:14:12.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003859.93 1000119.33 1010684.78 00:14:12.347 ======================================================== 00:14:12.347 Total : 256.00 0.12 1003149.47 1000119.33 1010684.78 00:14:12.347 00:14:12.607 13:28:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:12.607 13:28:18 -- target/delete_subsystem.sh@57 -- # kill -0 82523 00:14:12.607 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82523) - No such process 00:14:12.607 13:28:18 -- target/delete_subsystem.sh@67 -- # wait 82523 00:14:12.607 13:28:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:12.607 13:28:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:12.607 13:28:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:12.607 13:28:18 -- nvmf/common.sh@116 -- # sync 00:14:12.607 13:28:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:12.607 13:28:18 -- nvmf/common.sh@119 -- # set +e 00:14:12.607 13:28:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:12.607 13:28:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:12.607 rmmod nvme_tcp 00:14:12.607 rmmod nvme_fabrics 00:14:12.607 rmmod nvme_keyring 00:14:12.607 13:28:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:12.607 13:28:18 -- nvmf/common.sh@123 -- # set -e 00:14:12.607 13:28:18 -- nvmf/common.sh@124 -- # return 0 00:14:12.607 13:28:18 -- nvmf/common.sh@477 -- # '[' -n 82427 ']' 00:14:12.607 13:28:18 -- nvmf/common.sh@478 -- # killprocess 82427 00:14:12.607 13:28:18 -- common/autotest_common.sh@936 -- # '[' -z 82427 ']' 00:14:12.866 13:28:18 -- common/autotest_common.sh@940 -- # kill -0 82427 00:14:12.866 13:28:18 -- common/autotest_common.sh@941 -- # uname 00:14:12.866 13:28:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.866 13:28:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82427 00:14:12.866 13:28:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:12.866 killing process with pid 82427 00:14:12.866 13:28:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:12.866 13:28:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82427' 00:14:12.866 13:28:18 -- common/autotest_common.sh@955 -- # kill 82427 00:14:12.866 13:28:18 -- common/autotest_common.sh@960 -- # wait 82427 00:14:12.866 13:28:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:12.866 13:28:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:12.866 13:28:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:12.866 13:28:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.866 13:28:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:12.866 13:28:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.866 13:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.866 13:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.866 13:28:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:12.866 00:14:12.866 real 0m9.329s 00:14:12.866 user 0m28.793s 00:14:12.866 sys 0m1.487s 00:14:12.866 13:28:18 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:12.866 13:28:18 -- common/autotest_common.sh@10 -- # set +x 00:14:12.866 ************************************ 00:14:12.866 END TEST nvmf_delete_subsystem 00:14:12.866 ************************************ 00:14:13.125 13:28:18 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:13.125 13:28:18 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:13.125 13:28:18 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:13.125 13:28:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:13.125 13:28:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.125 13:28:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.125 ************************************ 00:14:13.125 START TEST nvmf_host_management 00:14:13.125 ************************************ 00:14:13.125 13:28:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:13.125 * Looking for test storage... 00:14:13.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:13.125 13:28:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:13.125 13:28:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:13.125 13:28:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:13.125 13:28:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:13.125 13:28:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:13.125 13:28:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:13.125 13:28:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:13.125 13:28:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:13.125 13:28:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:13.125 13:28:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.125 13:28:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:13.125 13:28:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:13.125 13:28:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:13.125 13:28:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:13.125 13:28:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:13.125 13:28:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:13.125 13:28:18 -- scripts/common.sh@344 -- # : 1 00:14:13.125 13:28:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:13.125 13:28:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.125 13:28:18 -- scripts/common.sh@364 -- # decimal 1 00:14:13.125 13:28:18 -- scripts/common.sh@352 -- # local d=1 00:14:13.125 13:28:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.125 13:28:18 -- scripts/common.sh@354 -- # echo 1 00:14:13.125 13:28:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:13.125 13:28:18 -- scripts/common.sh@365 -- # decimal 2 00:14:13.125 13:28:18 -- scripts/common.sh@352 -- # local d=2 00:14:13.125 13:28:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.125 13:28:18 -- scripts/common.sh@354 -- # echo 2 00:14:13.125 13:28:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:13.125 13:28:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:13.125 13:28:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:13.125 13:28:18 -- scripts/common.sh@367 -- # return 0 00:14:13.125 13:28:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.125 13:28:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:13.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.125 --rc genhtml_branch_coverage=1 00:14:13.125 --rc genhtml_function_coverage=1 00:14:13.125 --rc genhtml_legend=1 00:14:13.125 --rc geninfo_all_blocks=1 00:14:13.125 --rc geninfo_unexecuted_blocks=1 00:14:13.125 00:14:13.126 ' 00:14:13.126 13:28:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:13.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.126 --rc genhtml_branch_coverage=1 00:14:13.126 --rc genhtml_function_coverage=1 00:14:13.126 --rc genhtml_legend=1 00:14:13.126 --rc geninfo_all_blocks=1 00:14:13.126 --rc geninfo_unexecuted_blocks=1 00:14:13.126 00:14:13.126 ' 00:14:13.126 13:28:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:13.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.126 --rc genhtml_branch_coverage=1 00:14:13.126 --rc genhtml_function_coverage=1 00:14:13.126 --rc genhtml_legend=1 00:14:13.126 --rc geninfo_all_blocks=1 00:14:13.126 --rc geninfo_unexecuted_blocks=1 00:14:13.126 00:14:13.126 ' 00:14:13.126 13:28:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:13.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.126 --rc genhtml_branch_coverage=1 00:14:13.126 --rc genhtml_function_coverage=1 00:14:13.126 --rc genhtml_legend=1 00:14:13.126 --rc geninfo_all_blocks=1 00:14:13.126 --rc geninfo_unexecuted_blocks=1 00:14:13.126 00:14:13.126 ' 00:14:13.126 13:28:18 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:13.126 13:28:18 -- nvmf/common.sh@7 -- # uname -s 00:14:13.126 13:28:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.126 13:28:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.126 13:28:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.126 13:28:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.126 13:28:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.126 13:28:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.126 13:28:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.126 13:28:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.126 13:28:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.126 13:28:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
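Note: nvmf/common.sh generates a host NQN/ID pair here for the kernel initiator; the NVME_HOST array defined just below packs them into the --hostnqn/--hostid arguments that NVME_CONNECT would hand to nvme-cli. That path is not exercised in this particular run (spdk_nvme_perf and bdevperf act as the initiators), so the following is an illustrative sketch only, built from the variables in the trace; the subsystem NQN is a placeholder.
# Illustrative only -- this run never calls nvme connect; host values come from the
# variables traced above, and the target subsystem NQN is a placeholder.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 \
    --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35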
00:14:13.126 13:28:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:13.126 13:28:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.126 13:28:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.126 13:28:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:13.126 13:28:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.126 13:28:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.126 13:28:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.126 13:28:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.126 13:28:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.126 13:28:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.126 13:28:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.126 13:28:18 -- paths/export.sh@5 -- # export PATH 00:14:13.126 13:28:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.126 13:28:18 -- nvmf/common.sh@46 -- # : 0 00:14:13.126 13:28:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:13.126 13:28:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:13.126 13:28:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:13.126 13:28:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.126 13:28:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.126 13:28:18 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:13.126 13:28:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:13.126 13:28:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:13.126 13:28:18 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:13.126 13:28:18 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:13.126 13:28:18 -- target/host_management.sh@104 -- # nvmftestinit 00:14:13.126 13:28:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:13.126 13:28:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.126 13:28:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:13.126 13:28:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:13.126 13:28:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:13.126 13:28:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.126 13:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.126 13:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.126 13:28:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:13.126 13:28:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:13.126 13:28:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.126 13:28:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.126 13:28:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:13.126 13:28:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:13.126 13:28:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:13.126 13:28:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:13.126 13:28:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:13.126 13:28:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.126 13:28:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:13.126 13:28:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:13.126 13:28:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:13.126 13:28:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:13.126 13:28:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:13.126 13:28:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:13.386 Cannot find device "nvmf_tgt_br" 00:14:13.386 13:28:18 -- nvmf/common.sh@154 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.386 Cannot find device "nvmf_tgt_br2" 00:14:13.386 13:28:18 -- nvmf/common.sh@155 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:13.386 13:28:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:13.386 Cannot find device "nvmf_tgt_br" 00:14:13.386 13:28:18 -- nvmf/common.sh@157 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:13.386 Cannot find device "nvmf_tgt_br2" 00:14:13.386 13:28:18 -- nvmf/common.sh@158 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:13.386 13:28:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:13.386 13:28:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:13.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.386 13:28:18 -- nvmf/common.sh@161 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:13.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:13.386 13:28:18 -- nvmf/common.sh@162 -- # true 00:14:13.386 13:28:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:13.386 13:28:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:13.386 13:28:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:13.386 13:28:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:13.386 13:28:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:13.386 13:28:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:13.386 13:28:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:13.386 13:28:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:13.386 13:28:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:13.386 13:28:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:13.386 13:28:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:13.386 13:28:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:13.386 13:28:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:13.386 13:28:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:13.386 13:28:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.386 13:28:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.386 13:28:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:13.386 13:28:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:13.386 13:28:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.645 13:28:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.645 13:28:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.645 13:28:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.645 13:28:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.645 13:28:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:13.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:13.645 00:14:13.645 --- 10.0.0.2 ping statistics --- 00:14:13.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.645 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:13.645 13:28:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:13.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:13.645 00:14:13.645 --- 10.0.0.3 ping statistics --- 00:14:13.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.645 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:13.645 13:28:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:13.645 00:14:13.645 --- 10.0.0.1 ping statistics --- 00:14:13.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.645 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:13.645 13:28:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.645 13:28:19 -- nvmf/common.sh@421 -- # return 0 00:14:13.645 13:28:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:13.645 13:28:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.645 13:28:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:13.645 13:28:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:13.645 13:28:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.645 13:28:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:13.645 13:28:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:13.645 13:28:19 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:13.645 13:28:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:13.645 13:28:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.645 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.645 ************************************ 00:14:13.645 START TEST nvmf_host_management 00:14:13.645 ************************************ 00:14:13.645 13:28:19 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:13.645 13:28:19 -- target/host_management.sh@69 -- # starttarget 00:14:13.645 13:28:19 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:13.645 13:28:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:13.645 13:28:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.645 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.645 13:28:19 -- nvmf/common.sh@469 -- # nvmfpid=82767 00:14:13.645 13:28:19 -- nvmf/common.sh@470 -- # waitforlisten 82767 00:14:13.645 13:28:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:13.645 13:28:19 -- common/autotest_common.sh@829 -- # '[' -z 82767 ']' 00:14:13.645 13:28:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.645 13:28:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.645 13:28:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.645 13:28:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.645 13:28:19 -- common/autotest_common.sh@10 -- # set +x 00:14:13.645 [2024-12-15 13:28:19.232300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.645 [2024-12-15 13:28:19.232383] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.904 [2024-12-15 13:28:19.376963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.904 [2024-12-15 13:28:19.442116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:13.904 [2024-12-15 13:28:19.442287] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
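Note: the nvmf_veth_init sequence above builds the test network: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end 10.0.0.1 left in the root namespace, everything bridged over nvmf_br, and TCP port 4420 opened in iptables. A condensed, hand-runnable sketch of the same topology, taken from the ip/iptables commands in the trace (run as root):
# Condensed from the nvmf_veth_init trace above; device and namespace names as in the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second address
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # initiator -> target, as checked above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator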
00:14:13.904 [2024-12-15 13:28:19.442304] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.904 [2024-12-15 13:28:19.442315] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.904 [2024-12-15 13:28:19.442476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.904 [2024-12-15 13:28:19.443105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.904 [2024-12-15 13:28:19.443242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.904 [2024-12-15 13:28:19.443410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.841 13:28:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.841 13:28:20 -- common/autotest_common.sh@862 -- # return 0 00:14:14.841 13:28:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:14.841 13:28:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 13:28:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.841 13:28:20 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.841 13:28:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 [2024-12-15 13:28:20.301648] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.841 13:28:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.841 13:28:20 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:14.841 13:28:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 13:28:20 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:14.841 13:28:20 -- target/host_management.sh@23 -- # cat 00:14:14.841 13:28:20 -- target/host_management.sh@30 -- # rpc_cmd 00:14:14.841 13:28:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 Malloc0 00:14:14.841 [2024-12-15 13:28:20.378298] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.841 13:28:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.841 13:28:20 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:14.841 13:28:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.841 13:28:20 -- target/host_management.sh@73 -- # perfpid=82839 00:14:14.841 13:28:20 -- target/host_management.sh@74 -- # waitforlisten 82839 /var/tmp/bdevperf.sock 00:14:14.841 13:28:20 -- common/autotest_common.sh@829 -- # '[' -z 82839 ']' 00:14:14.841 13:28:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.841 13:28:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.841 13:28:20 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:14.841 13:28:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
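Note: the create_subsystem step above feeds a small batch of RPCs (rpcs.txt) to the target before bdevperf is launched. The file itself is never echoed in this log; the sketch below is a plausible scripts/rpc.py equivalent, assumed from what the trace shows being created (a TCP transport with the traced flags, a 64 MiB / 512-byte-block Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420). The harness issues these through its rpc_cmd wrapper instead of calling rpc.py directly.
# Assumed reconstruction of rpcs.txt -- the log does not print the file contents.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                       # flags as traced above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                          # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0     # serial number illustrative
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # implied by the later remove_host call
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420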
00:14:14.841 13:28:20 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:14.841 13:28:20 -- nvmf/common.sh@520 -- # config=() 00:14:14.841 13:28:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.841 13:28:20 -- nvmf/common.sh@520 -- # local subsystem config 00:14:14.841 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:14.841 13:28:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:14.841 13:28:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:14.841 { 00:14:14.841 "params": { 00:14:14.841 "name": "Nvme$subsystem", 00:14:14.841 "trtype": "$TEST_TRANSPORT", 00:14:14.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:14.841 "adrfam": "ipv4", 00:14:14.841 "trsvcid": "$NVMF_PORT", 00:14:14.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:14.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:14.841 "hdgst": ${hdgst:-false}, 00:14:14.841 "ddgst": ${ddgst:-false} 00:14:14.841 }, 00:14:14.841 "method": "bdev_nvme_attach_controller" 00:14:14.841 } 00:14:14.841 EOF 00:14:14.841 )") 00:14:14.841 13:28:20 -- nvmf/common.sh@542 -- # cat 00:14:14.841 13:28:20 -- nvmf/common.sh@544 -- # jq . 00:14:14.841 13:28:20 -- nvmf/common.sh@545 -- # IFS=, 00:14:14.841 13:28:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:14.841 "params": { 00:14:14.841 "name": "Nvme0", 00:14:14.841 "trtype": "tcp", 00:14:14.841 "traddr": "10.0.0.2", 00:14:14.841 "adrfam": "ipv4", 00:14:14.841 "trsvcid": "4420", 00:14:14.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:14.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:14.841 "hdgst": false, 00:14:14.841 "ddgst": false 00:14:14.841 }, 00:14:14.841 "method": "bdev_nvme_attach_controller" 00:14:14.841 }' 00:14:14.841 [2024-12-15 13:28:20.481018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:14.841 [2024-12-15 13:28:20.481109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82839 ] 00:14:15.100 [2024-12-15 13:28:20.620951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.100 [2024-12-15 13:28:20.683657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.359 Running I/O for 10 seconds... 
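Note: the waitforio check traced just below polls bdevperf's RPC socket until the Nvme0n1 bdev has completed enough reads to prove I/O is flowing. A minimal bash sketch of that loop, reconstructed from the xtrace (the harness uses its rpc_cmd wrapper; rpc.py is shown here for a standalone illustration, and the sleep interval is an assumption):
# Sketch of the waitforio pattern; thresholds and RPC names match the trace below.
waitforio() {
    local sock=$1 bdev=$2
    local i count
    for (( i = 10; i != 0; i-- )); do
        # bdevperf exposes an RPC socket; bdev_get_iostat reports per-bdev I/O counters.
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # Treat the workload as running once at least 100 reads have completed (2513 in this log).
        if [ "$count" -ge 100 ]; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme0n1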
00:14:15.926 13:28:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.926 13:28:21 -- common/autotest_common.sh@862 -- # return 0 00:14:15.926 13:28:21 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:15.926 13:28:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.926 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.926 13:28:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.926 13:28:21 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:15.926 13:28:21 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:15.926 13:28:21 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:15.926 13:28:21 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:15.926 13:28:21 -- target/host_management.sh@52 -- # local ret=1 00:14:15.926 13:28:21 -- target/host_management.sh@53 -- # local i 00:14:15.926 13:28:21 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:15.926 13:28:21 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:15.926 13:28:21 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:15.926 13:28:21 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:15.926 13:28:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.926 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.926 13:28:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.926 13:28:21 -- target/host_management.sh@55 -- # read_io_count=2513 00:14:15.926 13:28:21 -- target/host_management.sh@58 -- # '[' 2513 -ge 100 ']' 00:14:15.926 13:28:21 -- target/host_management.sh@59 -- # ret=0 00:14:15.926 13:28:21 -- target/host_management.sh@60 -- # break 00:14:15.926 13:28:21 -- target/host_management.sh@64 -- # return 0 00:14:15.926 13:28:21 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:15.926 13:28:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.926 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.926 [2024-12-15 13:28:21.591471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591580] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the state(5) to be set 00:14:15.926 [2024-12-15 13:28:21.591615] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aae70 is same with the 
state(5) to be set
[... the nvme_tcp_qpair_set_recv_state error for tqpair=0x22aae70 repeats many times while the host is removed from the subsystem, omitted for brevity ...]
00:14:15.926 [2024-12-15 13:28:21.595523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:15.926 [2024-12-15 13:28:21.595562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort is reported for admin cid:1, cid:2 and cid:3, omitted ...]
00:14:15.926 [2024-12-15 13:28:21.595645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc7a70 is same with the state(5) to be set
[... every outstanding READ/WRITE on sqid:1 is then printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08); the individual per-LBA records are omitted here and continue below ...]
00:14:15.927
[2024-12-15 13:28:21.596013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 13:28:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.927 [2024-12-15 13:28:21.596096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 13:28:21 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:15.927 [2024-12-15 13:28:21.596337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.927 [2024-12-15 13:28:21.596377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.927 [2024-12-15 13:28:21.596385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 
[2024-12-15 13:28:21.596396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 13:28:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.928 [2024-12-15 13:28:21.596539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:14:15.928 [2024-12-15 13:28:21.596749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 [2024-12-15 13:28:21.596973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:15.928 [2024-12-15 13:28:21.596982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:15.928 task offset: 81024 on job bdev=Nvme0n1 fails 00:14:15.928 00:14:15.928 Latency(us) 00:14:15.928 [2024-12-15T13:28:21.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.928 [2024-12-15T13:28:21.618Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:15.928 [2024-12-15T13:28:21.618Z] Job: Nvme0n1 ended in about 0.74 seconds with error 00:14:15.928 Verification LBA range: start 0x0 length 0x400 00:14:15.928 Nvme0n1 : 0.74 3612.80 225.80 86.96 0.00 17022.94 1921.40 22639.71 00:14:15.928 [2024-12-15T13:28:21.618Z] =================================================================================================================== 00:14:15.928 [2024-12-15T13:28:21.618Z] Total : 3612.80 225.80 86.96 0.00 17022.94 1921.40 22639.71 00:14:15.928 [2024-12-15 13:28:21.597063] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc6bdc0 was disconnected and freed. reset controller. 00:14:15.929 [2024-12-15 13:28:21.598186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:15.929 [2024-12-15 13:28:21.600117] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:15.929 [2024-12-15 13:28:21.600138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc7a70 (9): Bad file descriptor 00:14:15.929 13:28:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.929 13:28:21 -- target/host_management.sh@87 -- # sleep 1 00:14:15.929 [2024-12-15 13:28:21.610747] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:17.305 13:28:22 -- target/host_management.sh@91 -- # kill -9 82839 00:14:17.305 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82839) - No such process 00:14:17.305 13:28:22 -- target/host_management.sh@91 -- # true 00:14:17.305 13:28:22 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:17.305 13:28:22 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:17.305 13:28:22 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:17.305 13:28:22 -- nvmf/common.sh@520 -- # config=() 00:14:17.305 13:28:22 -- nvmf/common.sh@520 -- # local subsystem config 00:14:17.305 13:28:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:17.305 13:28:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:17.305 { 00:14:17.305 "params": { 00:14:17.305 "name": "Nvme$subsystem", 00:14:17.305 "trtype": "$TEST_TRANSPORT", 00:14:17.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:17.305 "adrfam": "ipv4", 00:14:17.305 "trsvcid": "$NVMF_PORT", 00:14:17.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:17.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:17.305 "hdgst": ${hdgst:-false}, 00:14:17.305 "ddgst": ${ddgst:-false} 00:14:17.305 }, 00:14:17.305 "method": "bdev_nvme_attach_controller" 00:14:17.305 } 00:14:17.305 EOF 00:14:17.305 )") 00:14:17.305 13:28:22 -- nvmf/common.sh@542 -- # cat 00:14:17.305 13:28:22 -- nvmf/common.sh@544 -- # jq . 
00:14:17.305 13:28:22 -- nvmf/common.sh@545 -- # IFS=, 00:14:17.305 13:28:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:17.305 "params": { 00:14:17.305 "name": "Nvme0", 00:14:17.305 "trtype": "tcp", 00:14:17.305 "traddr": "10.0.0.2", 00:14:17.305 "adrfam": "ipv4", 00:14:17.305 "trsvcid": "4420", 00:14:17.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:17.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:17.305 "hdgst": false, 00:14:17.305 "ddgst": false 00:14:17.305 }, 00:14:17.305 "method": "bdev_nvme_attach_controller" 00:14:17.305 }' 00:14:17.305 [2024-12-15 13:28:22.663806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:17.305 [2024-12-15 13:28:22.663888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82889 ] 00:14:17.305 [2024-12-15 13:28:22.803475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.305 [2024-12-15 13:28:22.857282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.564 Running I/O for 1 seconds... 00:14:18.527 00:14:18.527 Latency(us) 00:14:18.527 [2024-12-15T13:28:24.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.527 [2024-12-15T13:28:24.217Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:18.527 Verification LBA range: start 0x0 length 0x400 00:14:18.527 Nvme0n1 : 1.01 3865.76 241.61 0.00 0.00 16276.01 1355.40 22520.55 00:14:18.527 [2024-12-15T13:28:24.217Z] =================================================================================================================== 00:14:18.527 [2024-12-15T13:28:24.217Z] Total : 3865.76 241.61 0.00 0.00 16276.01 1355.40 22520.55 00:14:18.785 13:28:24 -- target/host_management.sh@101 -- # stoptarget 00:14:18.785 13:28:24 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:18.785 13:28:24 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:18.785 13:28:24 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:18.785 13:28:24 -- target/host_management.sh@40 -- # nvmftestfini 00:14:18.785 13:28:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:18.785 13:28:24 -- nvmf/common.sh@116 -- # sync 00:14:18.785 13:28:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:18.785 13:28:24 -- nvmf/common.sh@119 -- # set +e 00:14:18.785 13:28:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:18.785 13:28:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:18.785 rmmod nvme_tcp 00:14:18.785 rmmod nvme_fabrics 00:14:18.785 rmmod nvme_keyring 00:14:18.785 13:28:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:18.785 13:28:24 -- nvmf/common.sh@123 -- # set -e 00:14:18.785 13:28:24 -- nvmf/common.sh@124 -- # return 0 00:14:18.785 13:28:24 -- nvmf/common.sh@477 -- # '[' -n 82767 ']' 00:14:18.785 13:28:24 -- nvmf/common.sh@478 -- # killprocess 82767 00:14:18.785 13:28:24 -- common/autotest_common.sh@936 -- # '[' -z 82767 ']' 00:14:18.785 13:28:24 -- common/autotest_common.sh@940 -- # kill -0 82767 00:14:18.785 13:28:24 -- common/autotest_common.sh@941 -- # uname 00:14:18.785 13:28:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.785 13:28:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82767 00:14:18.785 
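For reference, the bdevperf re-run traced above is fed its configuration over /dev/fd/62; only the bdev_nvme_attach_controller parameters are printed verbatim in the trace. A minimal sketch of reproducing that run standalone, with the config written to a file instead of a pipe — the /tmp path and the surrounding "subsystems"/"bdev" envelope are assumptions for illustration, not output of the harness:

# Sketch only: attach-controller params copied from the trace above;
# the wrapper object and the temp-file path are assumed.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same flags as the run above: queue depth 64, 64 KiB IOs, verify workload, 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1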
13:28:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:18.785 13:28:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:18.785 13:28:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82767' 00:14:18.785 killing process with pid 82767 00:14:18.785 13:28:24 -- common/autotest_common.sh@955 -- # kill 82767 00:14:18.785 13:28:24 -- common/autotest_common.sh@960 -- # wait 82767 00:14:19.043 [2024-12-15 13:28:24.550994] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:19.043 13:28:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:19.043 13:28:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:19.043 13:28:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:19.043 13:28:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.043 13:28:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:19.043 13:28:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.044 13:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.044 13:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.044 13:28:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:19.044 00:14:19.044 real 0m5.444s 00:14:19.044 user 0m22.948s 00:14:19.044 sys 0m1.319s 00:14:19.044 13:28:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.044 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.044 ************************************ 00:14:19.044 END TEST nvmf_host_management 00:14:19.044 ************************************ 00:14:19.044 13:28:24 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:19.044 00:14:19.044 real 0m6.060s 00:14:19.044 user 0m23.127s 00:14:19.044 sys 0m1.573s 00:14:19.044 13:28:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:19.044 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.044 ************************************ 00:14:19.044 END TEST nvmf_host_management 00:14:19.044 ************************************ 00:14:19.044 13:28:24 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.044 13:28:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.044 13:28:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.044 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:14:19.044 ************************************ 00:14:19.044 START TEST nvmf_lvol 00:14:19.044 ************************************ 00:14:19.044 13:28:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:19.341 * Looking for test storage... 
00:14:19.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.341 13:28:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:19.341 13:28:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:19.341 13:28:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:19.341 13:28:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:19.341 13:28:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:19.341 13:28:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:19.341 13:28:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:19.341 13:28:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:19.341 13:28:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:19.341 13:28:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.341 13:28:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:19.341 13:28:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:19.341 13:28:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:19.341 13:28:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:19.341 13:28:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:19.341 13:28:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:19.341 13:28:24 -- scripts/common.sh@344 -- # : 1 00:14:19.341 13:28:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:19.341 13:28:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.341 13:28:24 -- scripts/common.sh@364 -- # decimal 1 00:14:19.341 13:28:24 -- scripts/common.sh@352 -- # local d=1 00:14:19.341 13:28:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.341 13:28:24 -- scripts/common.sh@354 -- # echo 1 00:14:19.341 13:28:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:19.341 13:28:24 -- scripts/common.sh@365 -- # decimal 2 00:14:19.341 13:28:24 -- scripts/common.sh@352 -- # local d=2 00:14:19.341 13:28:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.341 13:28:24 -- scripts/common.sh@354 -- # echo 2 00:14:19.341 13:28:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:19.341 13:28:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:19.341 13:28:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:19.341 13:28:24 -- scripts/common.sh@367 -- # return 0 00:14:19.341 13:28:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.341 13:28:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:19.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.341 --rc genhtml_branch_coverage=1 00:14:19.341 --rc genhtml_function_coverage=1 00:14:19.341 --rc genhtml_legend=1 00:14:19.341 --rc geninfo_all_blocks=1 00:14:19.341 --rc geninfo_unexecuted_blocks=1 00:14:19.341 00:14:19.341 ' 00:14:19.341 13:28:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:19.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.341 --rc genhtml_branch_coverage=1 00:14:19.341 --rc genhtml_function_coverage=1 00:14:19.341 --rc genhtml_legend=1 00:14:19.341 --rc geninfo_all_blocks=1 00:14:19.341 --rc geninfo_unexecuted_blocks=1 00:14:19.341 00:14:19.341 ' 00:14:19.341 13:28:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:19.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.341 --rc genhtml_branch_coverage=1 00:14:19.341 --rc genhtml_function_coverage=1 00:14:19.341 --rc genhtml_legend=1 00:14:19.341 --rc geninfo_all_blocks=1 00:14:19.341 --rc geninfo_unexecuted_blocks=1 00:14:19.341 00:14:19.341 ' 00:14:19.341 
13:28:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:19.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.341 --rc genhtml_branch_coverage=1 00:14:19.341 --rc genhtml_function_coverage=1 00:14:19.341 --rc genhtml_legend=1 00:14:19.341 --rc geninfo_all_blocks=1 00:14:19.341 --rc geninfo_unexecuted_blocks=1 00:14:19.341 00:14:19.341 ' 00:14:19.341 13:28:24 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.341 13:28:24 -- nvmf/common.sh@7 -- # uname -s 00:14:19.341 13:28:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.341 13:28:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.341 13:28:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.341 13:28:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.341 13:28:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.341 13:28:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.341 13:28:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.341 13:28:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.341 13:28:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.341 13:28:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.341 13:28:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:19.341 13:28:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:19.341 13:28:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.341 13:28:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.341 13:28:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.341 13:28:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.341 13:28:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.341 13:28:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.341 13:28:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.341 13:28:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.341 13:28:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.341 13:28:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.341 13:28:24 -- paths/export.sh@5 -- # export PATH 00:14:19.342 13:28:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.342 13:28:24 -- nvmf/common.sh@46 -- # : 0 00:14:19.342 13:28:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:19.342 13:28:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:19.342 13:28:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:19.342 13:28:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.342 13:28:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.342 13:28:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:19.342 13:28:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:19.342 13:28:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:19.342 13:28:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:19.342 13:28:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:19.342 13:28:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.342 13:28:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:19.342 13:28:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:19.342 13:28:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:19.342 13:28:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.342 13:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.342 13:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.342 13:28:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:19.342 13:28:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:19.342 13:28:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:19.342 13:28:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:19.342 13:28:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:19.342 13:28:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:19.342 13:28:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.342 13:28:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.342 13:28:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.342 13:28:24 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:19.342 13:28:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.342 13:28:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.342 13:28:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.342 13:28:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.342 13:28:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.342 13:28:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.342 13:28:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.342 13:28:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.342 13:28:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:19.342 13:28:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:19.342 Cannot find device "nvmf_tgt_br" 00:14:19.342 13:28:24 -- nvmf/common.sh@154 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.342 Cannot find device "nvmf_tgt_br2" 00:14:19.342 13:28:24 -- nvmf/common.sh@155 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:19.342 13:28:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:19.342 Cannot find device "nvmf_tgt_br" 00:14:19.342 13:28:24 -- nvmf/common.sh@157 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:19.342 Cannot find device "nvmf_tgt_br2" 00:14:19.342 13:28:24 -- nvmf/common.sh@158 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:19.342 13:28:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:19.342 13:28:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.342 13:28:24 -- nvmf/common.sh@161 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.342 13:28:24 -- nvmf/common.sh@162 -- # true 00:14:19.342 13:28:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.342 13:28:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.342 13:28:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.342 13:28:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.342 13:28:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.604 13:28:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.604 13:28:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.604 13:28:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.604 13:28:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.604 13:28:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:19.604 13:28:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:19.604 13:28:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:19.604 13:28:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:19.604 13:28:25 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.604 13:28:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.604 13:28:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.604 13:28:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:19.604 13:28:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:19.604 13:28:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.604 13:28:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.604 13:28:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.604 13:28:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.604 13:28:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:19.604 13:28:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:19.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:19.604 00:14:19.604 --- 10.0.0.2 ping statistics --- 00:14:19.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.604 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:19.604 13:28:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:19.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:14:19.604 00:14:19.604 --- 10.0.0.3 ping statistics --- 00:14:19.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.604 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:19.604 13:28:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:19.604 00:14:19.604 --- 10.0.0.1 ping statistics --- 00:14:19.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.604 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:19.604 13:28:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.604 13:28:25 -- nvmf/common.sh@421 -- # return 0 00:14:19.604 13:28:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.604 13:28:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.604 13:28:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.604 13:28:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.604 13:28:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.604 13:28:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.604 13:28:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.604 13:28:25 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:19.604 13:28:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.604 13:28:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.604 13:28:25 -- common/autotest_common.sh@10 -- # set +x 00:14:19.604 13:28:25 -- nvmf/common.sh@469 -- # nvmfpid=83120 00:14:19.604 13:28:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.604 13:28:25 -- nvmf/common.sh@470 -- # waitforlisten 83120 00:14:19.604 13:28:25 -- common/autotest_common.sh@829 -- # '[' -z 83120 ']' 00:14:19.604 13:28:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.604 13:28:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.604 13:28:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.604 13:28:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.604 13:28:25 -- common/autotest_common.sh@10 -- # set +x 00:14:19.604 [2024-12-15 13:28:25.219722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:19.604 [2024-12-15 13:28:25.219820] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.867 [2024-12-15 13:28:25.349372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.867 [2024-12-15 13:28:25.412879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.867 [2024-12-15 13:28:25.413026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.867 [2024-12-15 13:28:25.413038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.867 [2024-12-15 13:28:25.413046] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
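The interface plumbing traced earlier in this test (nvmf_veth_init) amounts to a veth/bridge topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3 while the initiator side stays in the root namespace on 10.0.0.1, with the host-side veth ends enslaved to the nvmf_br bridge. A condensed sketch of those commands, collected from the trace above (run as root; error handling and teardown omitted):

# Namespace for the target side
ip netns add nvmf_tgt_ns_spdk
# veth pairs: initiator interface plus two target interfaces
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 on the initiator, 10.0.0.2/10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge tying the three host-side veth ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP (port 4420) in and bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks, as in the log
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1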
00:14:19.867 [2024-12-15 13:28:25.413205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.867 [2024-12-15 13:28:25.413347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.867 [2024-12-15 13:28:25.413352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.803 13:28:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.803 13:28:26 -- common/autotest_common.sh@862 -- # return 0 00:14:20.803 13:28:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.803 13:28:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.803 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:14:20.803 13:28:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.803 13:28:26 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:20.803 [2024-12-15 13:28:26.459219] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.803 13:28:26 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.061 13:28:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:21.061 13:28:26 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.320 13:28:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:21.320 13:28:26 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:21.578 13:28:27 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:21.837 13:28:27 -- target/nvmf_lvol.sh@29 -- # lvs=1b39f0ff-9ff4-479d-aa42-3ed4bab4aacd 00:14:21.837 13:28:27 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b39f0ff-9ff4-479d-aa42-3ed4bab4aacd lvol 20 00:14:22.095 13:28:27 -- target/nvmf_lvol.sh@32 -- # lvol=dbbd0bce-d855-4483-bfc9-1d8080810fc7 00:14:22.095 13:28:27 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:22.354 13:28:28 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dbbd0bce-d855-4483-bfc9-1d8080810fc7 00:14:22.613 13:28:28 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:22.871 [2024-12-15 13:28:28.418586] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.871 13:28:28 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.130 13:28:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=83262 00:14:23.130 13:28:28 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:23.130 13:28:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:24.066 13:28:29 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot dbbd0bce-d855-4483-bfc9-1d8080810fc7 MY_SNAPSHOT 00:14:24.633 13:28:30 -- target/nvmf_lvol.sh@47 -- # snapshot=81d4696f-e27c-42e0-82bc-932fff5d37a1 00:14:24.633 13:28:30 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize dbbd0bce-d855-4483-bfc9-1d8080810fc7 30 00:14:24.892 13:28:30 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 81d4696f-e27c-42e0-82bc-932fff5d37a1 MY_CLONE 00:14:25.150 13:28:30 -- target/nvmf_lvol.sh@49 -- # clone=8b9b2f3c-55f5-49bf-a612-0812e88bb464 00:14:25.150 13:28:30 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8b9b2f3c-55f5-49bf-a612-0812e88bb464 00:14:25.718 13:28:31 -- target/nvmf_lvol.sh@53 -- # wait 83262 00:14:33.836 Initializing NVMe Controllers 00:14:33.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:33.836 Controller IO queue size 128, less than required. 00:14:33.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:33.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:33.836 Initialization complete. Launching workers. 00:14:33.836 ======================================================== 00:14:33.836 Latency(us) 00:14:33.836 Device Information : IOPS MiB/s Average min max 00:14:33.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12324.10 48.14 10388.18 2571.32 81710.69 00:14:33.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12266.30 47.92 10436.48 3127.10 59927.24 00:14:33.836 ======================================================== 00:14:33.836 Total : 24590.40 96.06 10412.27 2571.32 81710.69 00:14:33.836 00:14:33.836 13:28:39 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.836 13:28:39 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dbbd0bce-d855-4483-bfc9-1d8080810fc7 00:14:34.095 13:28:39 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b39f0ff-9ff4-479d-aa42-3ed4bab4aacd 00:14:34.354 13:28:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:34.354 13:28:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:34.354 13:28:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:34.354 13:28:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:34.354 13:28:39 -- nvmf/common.sh@116 -- # sync 00:14:34.354 13:28:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:34.354 13:28:39 -- nvmf/common.sh@119 -- # set +e 00:14:34.354 13:28:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:34.354 13:28:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:34.354 rmmod nvme_tcp 00:14:34.354 rmmod nvme_fabrics 00:14:34.354 rmmod nvme_keyring 00:14:34.354 13:28:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:34.354 13:28:39 -- nvmf/common.sh@123 -- # set -e 00:14:34.354 13:28:39 -- nvmf/common.sh@124 -- # return 0 00:14:34.354 13:28:39 -- nvmf/common.sh@477 -- # '[' -n 83120 ']' 00:14:34.354 13:28:39 -- nvmf/common.sh@478 -- # killprocess 83120 00:14:34.354 13:28:39 -- common/autotest_common.sh@936 -- # '[' -z 83120 ']' 00:14:34.354 13:28:39 -- common/autotest_common.sh@940 -- # kill -0 83120 00:14:34.354 13:28:39 -- common/autotest_common.sh@941 -- # uname 00:14:34.354 13:28:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:34.354 13:28:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 83120 00:14:34.354 killing process with pid 83120 00:14:34.354 13:28:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:34.354 13:28:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:34.354 13:28:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83120' 00:14:34.354 13:28:39 -- common/autotest_common.sh@955 -- # kill 83120 00:14:34.354 13:28:39 -- common/autotest_common.sh@960 -- # wait 83120 00:14:34.612 13:28:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:34.612 13:28:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:34.612 13:28:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:34.612 13:28:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.612 13:28:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:34.612 13:28:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.612 13:28:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.612 13:28:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.612 13:28:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:34.612 00:14:34.612 real 0m15.518s 00:14:34.612 user 1m5.296s 00:14:34.612 sys 0m3.725s 00:14:34.612 13:28:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:34.612 ************************************ 00:14:34.612 13:28:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.612 END TEST nvmf_lvol 00:14:34.612 ************************************ 00:14:34.612 13:28:40 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:34.612 13:28:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:34.612 13:28:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.612 13:28:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.612 ************************************ 00:14:34.612 START TEST nvmf_lvs_grow 00:14:34.612 ************************************ 00:14:34.613 13:28:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:34.872 * Looking for test storage... 
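Recapping the nvmf_lvol run that just completed above: the test stacks a raid0 over two malloc bdevs, builds an lvstore and a 20 MiB lvol on top, exports the lvol over NVMe/TCP, and exercises snapshot, resize, clone and inflate while spdk_nvme_perf runs against the target. A condensed sketch of that RPC sequence; the UUID variables are placeholders for the values the create calls returned in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                               # -> Malloc0
$rpc bdev_malloc_create 64 512                               # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)               # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)              # 20 MiB lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# ... while spdk_nvme_perf runs against 10.0.0.2:4420 (randwrite, qd 128, 10 s) ...
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"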
00:14:34.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:34.872 13:28:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:34.872 13:28:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:34.872 13:28:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:34.872 13:28:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:34.872 13:28:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:34.872 13:28:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:34.872 13:28:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:34.872 13:28:40 -- scripts/common.sh@335 -- # IFS=.-: 00:14:34.872 13:28:40 -- scripts/common.sh@335 -- # read -ra ver1 00:14:34.872 13:28:40 -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.872 13:28:40 -- scripts/common.sh@336 -- # read -ra ver2 00:14:34.872 13:28:40 -- scripts/common.sh@337 -- # local 'op=<' 00:14:34.872 13:28:40 -- scripts/common.sh@339 -- # ver1_l=2 00:14:34.872 13:28:40 -- scripts/common.sh@340 -- # ver2_l=1 00:14:34.872 13:28:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:34.872 13:28:40 -- scripts/common.sh@343 -- # case "$op" in 00:14:34.872 13:28:40 -- scripts/common.sh@344 -- # : 1 00:14:34.872 13:28:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:34.872 13:28:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:34.872 13:28:40 -- scripts/common.sh@364 -- # decimal 1 00:14:34.872 13:28:40 -- scripts/common.sh@352 -- # local d=1 00:14:34.872 13:28:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.872 13:28:40 -- scripts/common.sh@354 -- # echo 1 00:14:34.872 13:28:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:34.872 13:28:40 -- scripts/common.sh@365 -- # decimal 2 00:14:34.872 13:28:40 -- scripts/common.sh@352 -- # local d=2 00:14:34.872 13:28:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.872 13:28:40 -- scripts/common.sh@354 -- # echo 2 00:14:34.872 13:28:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:34.872 13:28:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:34.872 13:28:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:34.872 13:28:40 -- scripts/common.sh@367 -- # return 0 00:14:34.872 13:28:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.872 13:28:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.872 --rc genhtml_branch_coverage=1 00:14:34.872 --rc genhtml_function_coverage=1 00:14:34.872 --rc genhtml_legend=1 00:14:34.872 --rc geninfo_all_blocks=1 00:14:34.872 --rc geninfo_unexecuted_blocks=1 00:14:34.872 00:14:34.872 ' 00:14:34.872 13:28:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.872 --rc genhtml_branch_coverage=1 00:14:34.872 --rc genhtml_function_coverage=1 00:14:34.872 --rc genhtml_legend=1 00:14:34.872 --rc geninfo_all_blocks=1 00:14:34.872 --rc geninfo_unexecuted_blocks=1 00:14:34.872 00:14:34.872 ' 00:14:34.872 13:28:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.872 --rc genhtml_branch_coverage=1 00:14:34.872 --rc genhtml_function_coverage=1 00:14:34.872 --rc genhtml_legend=1 00:14:34.872 --rc geninfo_all_blocks=1 00:14:34.872 --rc geninfo_unexecuted_blocks=1 00:14:34.872 00:14:34.872 ' 00:14:34.872 
13:28:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.872 --rc genhtml_branch_coverage=1 00:14:34.872 --rc genhtml_function_coverage=1 00:14:34.872 --rc genhtml_legend=1 00:14:34.872 --rc geninfo_all_blocks=1 00:14:34.872 --rc geninfo_unexecuted_blocks=1 00:14:34.872 00:14:34.872 ' 00:14:34.872 13:28:40 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.872 13:28:40 -- nvmf/common.sh@7 -- # uname -s 00:14:34.872 13:28:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.872 13:28:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.872 13:28:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.872 13:28:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.872 13:28:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.872 13:28:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.872 13:28:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.872 13:28:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.872 13:28:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.872 13:28:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.872 13:28:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:34.872 13:28:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:14:34.872 13:28:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.872 13:28:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.872 13:28:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.872 13:28:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.872 13:28:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.872 13:28:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.872 13:28:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.872 13:28:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.872 13:28:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.872 13:28:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.872 13:28:40 -- paths/export.sh@5 -- # export PATH 00:14:34.872 13:28:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.872 13:28:40 -- nvmf/common.sh@46 -- # : 0 00:14:34.872 13:28:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.872 13:28:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.872 13:28:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.872 13:28:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.872 13:28:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.872 13:28:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.872 13:28:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.872 13:28:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.872 13:28:40 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.872 13:28:40 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:34.872 13:28:40 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:34.872 13:28:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.872 13:28:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.872 13:28:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.873 13:28:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.873 13:28:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.873 13:28:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.873 13:28:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.873 13:28:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.873 13:28:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:34.873 13:28:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:34.873 13:28:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:34.873 13:28:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:34.873 13:28:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:34.873 13:28:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:34.873 13:28:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.873 13:28:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.873 13:28:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:34.873 13:28:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:34.873 13:28:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.873 13:28:40 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.873 13:28:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.873 13:28:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.873 13:28:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.873 13:28:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.873 13:28:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.873 13:28:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.873 13:28:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:34.873 13:28:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:34.873 Cannot find device "nvmf_tgt_br" 00:14:34.873 13:28:40 -- nvmf/common.sh@154 -- # true 00:14:34.873 13:28:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.873 Cannot find device "nvmf_tgt_br2" 00:14:34.873 13:28:40 -- nvmf/common.sh@155 -- # true 00:14:34.873 13:28:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:34.873 13:28:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:34.873 Cannot find device "nvmf_tgt_br" 00:14:34.873 13:28:40 -- nvmf/common.sh@157 -- # true 00:14:34.873 13:28:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:34.873 Cannot find device "nvmf_tgt_br2" 00:14:34.873 13:28:40 -- nvmf/common.sh@158 -- # true 00:14:34.873 13:28:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:35.132 13:28:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:35.132 13:28:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.132 13:28:40 -- nvmf/common.sh@161 -- # true 00:14:35.132 13:28:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.132 13:28:40 -- nvmf/common.sh@162 -- # true 00:14:35.132 13:28:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.132 13:28:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.132 13:28:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.132 13:28:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.132 13:28:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.132 13:28:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.132 13:28:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.132 13:28:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.132 13:28:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.132 13:28:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:35.132 13:28:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:35.132 13:28:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:35.132 13:28:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:35.132 13:28:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.132 13:28:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:35.132 13:28:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.132 13:28:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:35.132 13:28:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:35.132 13:28:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.132 13:28:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.132 13:28:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.132 13:28:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.132 13:28:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.132 13:28:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:35.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:14:35.132 00:14:35.132 --- 10.0.0.2 ping statistics --- 00:14:35.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.132 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:35.132 13:28:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:35.132 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.132 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:35.132 00:14:35.132 --- 10.0.0.3 ping statistics --- 00:14:35.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.132 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:35.132 13:28:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:35.132 00:14:35.132 --- 10.0.0.1 ping statistics --- 00:14:35.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.132 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:35.132 13:28:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.132 13:28:40 -- nvmf/common.sh@421 -- # return 0 00:14:35.132 13:28:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.132 13:28:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.132 13:28:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:35.132 13:28:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:35.132 13:28:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.132 13:28:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:35.132 13:28:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:35.132 13:28:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:35.132 13:28:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.132 13:28:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.132 13:28:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.132 13:28:40 -- nvmf/common.sh@469 -- # nvmfpid=83635 00:14:35.132 13:28:40 -- nvmf/common.sh@470 -- # waitforlisten 83635 00:14:35.391 13:28:40 -- common/autotest_common.sh@829 -- # '[' -z 83635 ']' 00:14:35.391 13:28:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.391 13:28:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.391 13:28:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:35.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
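Everything nvmf_veth_init just did above can be read as one short block of iproute2 commands: a network namespace for the target, three veth pairs tied together by a bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, plus iptables rules and a sanity ping. A condensed sketch of the same steps (names and addresses exactly as traced; requires root, sketch only):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target namespace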
00:14:35.391 13:28:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.391 13:28:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.391 13:28:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.391 [2024-12-15 13:28:40.875321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:35.391 [2024-12-15 13:28:40.875432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.391 [2024-12-15 13:28:41.014158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.391 [2024-12-15 13:28:41.066169] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.391 [2024-12-15 13:28:41.066314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.391 [2024-12-15 13:28:41.066326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.391 [2024-12-15 13:28:41.066334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.391 [2024-12-15 13:28:41.066360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.326 13:28:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.326 13:28:41 -- common/autotest_common.sh@862 -- # return 0 00:14:36.326 13:28:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.326 13:28:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.326 13:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:36.326 13:28:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.326 13:28:41 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:36.584 [2024-12-15 13:28:42.020448] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:36.584 13:28:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.584 13:28:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.584 13:28:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.584 ************************************ 00:14:36.584 START TEST lvs_grow_clean 00:14:36.584 ************************************ 00:14:36.584 13:28:42 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.584 13:28:42 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.843 13:28:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:36.843 13:28:42 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:37.101 13:28:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=33300c28-5684-4bf7-b45f-91549ee84e60 00:14:37.101 13:28:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:37.101 13:28:42 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:37.360 13:28:42 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:37.360 13:28:42 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:37.360 13:28:42 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 33300c28-5684-4bf7-b45f-91549ee84e60 lvol 150 00:14:37.618 13:28:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b53d6d54-b648-48af-8261-8c8275c59121 00:14:37.618 13:28:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:37.618 13:28:43 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:37.876 [2024-12-15 13:28:43.473648] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:37.876 [2024-12-15 13:28:43.473715] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:37.876 true 00:14:37.876 13:28:43 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:37.876 13:28:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:38.134 13:28:43 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:38.134 13:28:43 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.393 13:28:43 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b53d6d54-b648-48af-8261-8c8275c59121 00:14:38.652 13:28:44 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:38.910 [2024-12-15 13:28:44.398202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.910 13:28:44 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.169 13:28:44 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83797 00:14:39.169 13:28:44 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:39.169 13:28:44 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.169 13:28:44 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83797 /var/tmp/bdevperf.sock 00:14:39.169 13:28:44 -- common/autotest_common.sh@829 -- # '[' -z 83797 ']' 00:14:39.169 
13:28:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.169 13:28:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.169 13:28:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.169 13:28:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.169 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:14:39.169 [2024-12-15 13:28:44.654229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:39.169 [2024-12-15 13:28:44.654313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83797 ] 00:14:39.169 [2024-12-15 13:28:44.792536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.428 [2024-12-15 13:28:44.861913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.991 13:28:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.992 13:28:45 -- common/autotest_common.sh@862 -- # return 0 00:14:39.992 13:28:45 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:40.249 Nvme0n1 00:14:40.249 13:28:45 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:40.507 [ 00:14:40.507 { 00:14:40.507 "aliases": [ 00:14:40.507 "b53d6d54-b648-48af-8261-8c8275c59121" 00:14:40.507 ], 00:14:40.507 "assigned_rate_limits": { 00:14:40.507 "r_mbytes_per_sec": 0, 00:14:40.507 "rw_ios_per_sec": 0, 00:14:40.507 "rw_mbytes_per_sec": 0, 00:14:40.507 "w_mbytes_per_sec": 0 00:14:40.507 }, 00:14:40.507 "block_size": 4096, 00:14:40.507 "claimed": false, 00:14:40.507 "driver_specific": { 00:14:40.507 "mp_policy": "active_passive", 00:14:40.507 "nvme": [ 00:14:40.507 { 00:14:40.507 "ctrlr_data": { 00:14:40.507 "ana_reporting": false, 00:14:40.507 "cntlid": 1, 00:14:40.507 "firmware_revision": "24.01.1", 00:14:40.507 "model_number": "SPDK bdev Controller", 00:14:40.507 "multi_ctrlr": true, 00:14:40.507 "oacs": { 00:14:40.507 "firmware": 0, 00:14:40.507 "format": 0, 00:14:40.508 "ns_manage": 0, 00:14:40.508 "security": 0 00:14:40.508 }, 00:14:40.508 "serial_number": "SPDK0", 00:14:40.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:40.508 "vendor_id": "0x8086" 00:14:40.508 }, 00:14:40.508 "ns_data": { 00:14:40.508 "can_share": true, 00:14:40.508 "id": 1 00:14:40.508 }, 00:14:40.508 "trid": { 00:14:40.508 "adrfam": "IPv4", 00:14:40.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:40.508 "traddr": "10.0.0.2", 00:14:40.508 "trsvcid": "4420", 00:14:40.508 "trtype": "TCP" 00:14:40.508 }, 00:14:40.508 "vs": { 00:14:40.508 "nvme_version": "1.3" 00:14:40.508 } 00:14:40.508 } 00:14:40.508 ] 00:14:40.508 }, 00:14:40.508 "name": "Nvme0n1", 00:14:40.508 "num_blocks": 38912, 00:14:40.508 "product_name": "NVMe disk", 00:14:40.508 "supported_io_types": { 00:14:40.508 "abort": true, 00:14:40.508 "compare": true, 00:14:40.508 "compare_and_write": true, 00:14:40.508 "flush": true, 00:14:40.508 "nvme_admin": true, 00:14:40.508 "nvme_io": true, 00:14:40.508 "read": true, 
00:14:40.508 "reset": true, 00:14:40.508 "unmap": true, 00:14:40.508 "write": true, 00:14:40.508 "write_zeroes": true 00:14:40.508 }, 00:14:40.508 "uuid": "b53d6d54-b648-48af-8261-8c8275c59121", 00:14:40.508 "zoned": false 00:14:40.508 } 00:14:40.508 ] 00:14:40.508 13:28:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83844 00:14:40.508 13:28:46 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:40.508 13:28:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:40.508 Running I/O for 10 seconds... 00:14:41.883 Latency(us) 00:14:41.883 [2024-12-15T13:28:47.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.883 [2024-12-15T13:28:47.573Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.883 Nvme0n1 : 1.00 7570.00 29.57 0.00 0.00 0.00 0.00 0.00 00:14:41.883 [2024-12-15T13:28:47.573Z] =================================================================================================================== 00:14:41.883 [2024-12-15T13:28:47.573Z] Total : 7570.00 29.57 0.00 0.00 0.00 0.00 0.00 00:14:41.883 00:14:42.450 13:28:48 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:42.709 [2024-12-15T13:28:48.399Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.709 Nvme0n1 : 2.00 7660.00 29.92 0.00 0.00 0.00 0.00 0.00 00:14:42.709 [2024-12-15T13:28:48.399Z] =================================================================================================================== 00:14:42.709 [2024-12-15T13:28:48.399Z] Total : 7660.00 29.92 0.00 0.00 0.00 0.00 0.00 00:14:42.709 00:14:42.968 true 00:14:42.968 13:28:48 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:42.968 13:28:48 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:43.227 13:28:48 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:43.227 13:28:48 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:43.227 13:28:48 -- target/nvmf_lvs_grow.sh@65 -- # wait 83844 00:14:43.794 [2024-12-15T13:28:49.484Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.794 Nvme0n1 : 3.00 7633.33 29.82 0.00 0.00 0.00 0.00 0.00 00:14:43.794 [2024-12-15T13:28:49.484Z] =================================================================================================================== 00:14:43.794 [2024-12-15T13:28:49.484Z] Total : 7633.33 29.82 0.00 0.00 0.00 0.00 0.00 00:14:43.794 00:14:44.730 [2024-12-15T13:28:50.420Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.730 Nvme0n1 : 4.00 7572.75 29.58 0.00 0.00 0.00 0.00 0.00 00:14:44.730 [2024-12-15T13:28:50.420Z] =================================================================================================================== 00:14:44.730 [2024-12-15T13:28:50.420Z] Total : 7572.75 29.58 0.00 0.00 0.00 0.00 0.00 00:14:44.730 00:14:45.674 [2024-12-15T13:28:51.364Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.674 Nvme0n1 : 5.00 7525.00 29.39 0.00 0.00 0.00 0.00 0.00 00:14:45.674 [2024-12-15T13:28:51.364Z] =================================================================================================================== 00:14:45.674 [2024-12-15T13:28:51.364Z] Total : 7525.00 
29.39 0.00 0.00 0.00 0.00 0.00 00:14:45.674 00:14:46.647 [2024-12-15T13:28:52.337Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.647 Nvme0n1 : 6.00 7486.83 29.25 0.00 0.00 0.00 0.00 0.00 00:14:46.647 [2024-12-15T13:28:52.337Z] =================================================================================================================== 00:14:46.647 [2024-12-15T13:28:52.337Z] Total : 7486.83 29.25 0.00 0.00 0.00 0.00 0.00 00:14:46.647 00:14:47.583 [2024-12-15T13:28:53.273Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.583 Nvme0n1 : 7.00 7458.43 29.13 0.00 0.00 0.00 0.00 0.00 00:14:47.583 [2024-12-15T13:28:53.273Z] =================================================================================================================== 00:14:47.583 [2024-12-15T13:28:53.273Z] Total : 7458.43 29.13 0.00 0.00 0.00 0.00 0.00 00:14:47.583 00:14:48.518 [2024-12-15T13:28:54.208Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.518 Nvme0n1 : 8.00 7353.75 28.73 0.00 0.00 0.00 0.00 0.00 00:14:48.518 [2024-12-15T13:28:54.208Z] =================================================================================================================== 00:14:48.518 [2024-12-15T13:28:54.208Z] Total : 7353.75 28.73 0.00 0.00 0.00 0.00 0.00 00:14:48.518 00:14:49.894 [2024-12-15T13:28:55.584Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.894 Nvme0n1 : 9.00 7332.00 28.64 0.00 0.00 0.00 0.00 0.00 00:14:49.894 [2024-12-15T13:28:55.584Z] =================================================================================================================== 00:14:49.894 [2024-12-15T13:28:55.584Z] Total : 7332.00 28.64 0.00 0.00 0.00 0.00 0.00 00:14:49.894 00:14:50.829 [2024-12-15T13:28:56.519Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.829 Nvme0n1 : 10.00 7328.90 28.63 0.00 0.00 0.00 0.00 0.00 00:14:50.829 [2024-12-15T13:28:56.519Z] =================================================================================================================== 00:14:50.829 [2024-12-15T13:28:56.519Z] Total : 7328.90 28.63 0.00 0.00 0.00 0.00 0.00 00:14:50.829 00:14:50.829 00:14:50.829 Latency(us) 00:14:50.829 [2024-12-15T13:28:56.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.829 [2024-12-15T13:28:56.519Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.829 Nvme0n1 : 10.02 7330.46 28.63 0.00 0.00 17456.64 5510.98 104857.60 00:14:50.829 [2024-12-15T13:28:56.519Z] =================================================================================================================== 00:14:50.829 [2024-12-15T13:28:56.519Z] Total : 7330.46 28.63 0.00 0.00 17456.64 5510.98 104857.60 00:14:50.829 0 00:14:50.829 13:28:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83797 00:14:50.829 13:28:56 -- common/autotest_common.sh@936 -- # '[' -z 83797 ']' 00:14:50.829 13:28:56 -- common/autotest_common.sh@940 -- # kill -0 83797 00:14:50.829 13:28:56 -- common/autotest_common.sh@941 -- # uname 00:14:50.829 13:28:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.829 13:28:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83797 00:14:50.829 13:28:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:50.829 13:28:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:50.829 killing process with pid 83797 00:14:50.829 
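Stripped of the harness, the lvs_grow_clean pass above is a short RPC sequence against the running target: build an lvstore on a 200M aio-backed file, expose a 150M lvol over NVMe/TCP, enlarge the file, rescan, and grow the lvstore while bdevperf keeps writing. A condensed sketch using the same rpc.py calls (RPC and AIO_FILE abbreviate the full paths traced above; the UUIDs come back from the create calls):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$AIO_FILE"
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)   # 49 data clusters
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)                         # 150 MiB volume
    truncate -s 400M "$AIO_FILE"          # enlarge the backing file...
    $RPC bdev_aio_rescan aio_bdev         # ...and let the aio bdev pick up the new size
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # ...bdevperf drives randwrite I/O against the exported namespace...
    $RPC bdev_lvol_grow_lvstore -u "$lvs" # lvstore expands into the new space
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99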
13:28:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83797' 00:14:50.829 13:28:56 -- common/autotest_common.sh@955 -- # kill 83797 00:14:50.829 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.829 00:14:50.829 Latency(us) 00:14:50.829 [2024-12-15T13:28:56.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.829 [2024-12-15T13:28:56.519Z] =================================================================================================================== 00:14:50.829 [2024-12-15T13:28:56.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.829 13:28:56 -- common/autotest_common.sh@960 -- # wait 83797 00:14:50.829 13:28:56 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:51.088 13:28:56 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:51.088 13:28:56 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:51.346 13:28:56 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:51.346 13:28:56 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:51.346 13:28:56 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.604 [2024-12-15 13:28:57.238258] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:51.604 13:28:57 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:51.604 13:28:57 -- common/autotest_common.sh@650 -- # local es=0 00:14:51.604 13:28:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:51.604 13:28:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.604 13:28:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.604 13:28:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.604 13:28:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.604 13:28:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.604 13:28:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.604 13:28:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.604 13:28:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:51.604 13:28:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:51.863 2024/12/15 13:28:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:33300c28-5684-4bf7-b45f-91549ee84e60], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:51.863 request: 00:14:51.863 { 00:14:51.863 "method": "bdev_lvol_get_lvstores", 00:14:51.863 "params": { 00:14:51.863 "uuid": "33300c28-5684-4bf7-b45f-91549ee84e60" 00:14:51.863 } 00:14:51.863 } 00:14:51.863 Got JSON-RPC error response 00:14:51.863 GoRPCClient: error on JSON-RPC call 00:14:51.863 13:28:57 -- common/autotest_common.sh@653 -- # es=1 00:14:51.863 13:28:57 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.863 13:28:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.863 13:28:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.863 13:28:57 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:52.121 aio_bdev 00:14:52.121 13:28:57 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b53d6d54-b648-48af-8261-8c8275c59121 00:14:52.121 13:28:57 -- common/autotest_common.sh@897 -- # local bdev_name=b53d6d54-b648-48af-8261-8c8275c59121 00:14:52.121 13:28:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.121 13:28:57 -- common/autotest_common.sh@899 -- # local i 00:14:52.121 13:28:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.121 13:28:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.121 13:28:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:52.379 13:28:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b53d6d54-b648-48af-8261-8c8275c59121 -t 2000 00:14:52.638 [ 00:14:52.638 { 00:14:52.638 "aliases": [ 00:14:52.638 "lvs/lvol" 00:14:52.638 ], 00:14:52.638 "assigned_rate_limits": { 00:14:52.638 "r_mbytes_per_sec": 0, 00:14:52.638 "rw_ios_per_sec": 0, 00:14:52.638 "rw_mbytes_per_sec": 0, 00:14:52.638 "w_mbytes_per_sec": 0 00:14:52.638 }, 00:14:52.638 "block_size": 4096, 00:14:52.638 "claimed": false, 00:14:52.638 "driver_specific": { 00:14:52.638 "lvol": { 00:14:52.638 "base_bdev": "aio_bdev", 00:14:52.638 "clone": false, 00:14:52.638 "esnap_clone": false, 00:14:52.638 "lvol_store_uuid": "33300c28-5684-4bf7-b45f-91549ee84e60", 00:14:52.638 "snapshot": false, 00:14:52.638 "thin_provision": false 00:14:52.638 } 00:14:52.638 }, 00:14:52.638 "name": "b53d6d54-b648-48af-8261-8c8275c59121", 00:14:52.638 "num_blocks": 38912, 00:14:52.638 "product_name": "Logical Volume", 00:14:52.638 "supported_io_types": { 00:14:52.638 "abort": false, 00:14:52.638 "compare": false, 00:14:52.638 "compare_and_write": false, 00:14:52.638 "flush": false, 00:14:52.638 "nvme_admin": false, 00:14:52.638 "nvme_io": false, 00:14:52.638 "read": true, 00:14:52.638 "reset": true, 00:14:52.638 "unmap": true, 00:14:52.638 "write": true, 00:14:52.638 "write_zeroes": true 00:14:52.638 }, 00:14:52.638 "uuid": "b53d6d54-b648-48af-8261-8c8275c59121", 00:14:52.638 "zoned": false 00:14:52.638 } 00:14:52.638 ] 00:14:52.638 13:28:58 -- common/autotest_common.sh@905 -- # return 0 00:14:52.638 13:28:58 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:52.638 13:28:58 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:52.896 13:28:58 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:52.896 13:28:58 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:52.896 13:28:58 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:53.155 13:28:58 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:53.155 13:28:58 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b53d6d54-b648-48af-8261-8c8275c59121 00:14:53.413 13:28:59 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 33300c28-5684-4bf7-b45f-91549ee84e60 00:14:53.671 13:28:59 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:53.930 13:28:59 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.498 ************************************ 00:14:54.498 END TEST lvs_grow_clean 00:14:54.498 ************************************ 00:14:54.498 00:14:54.498 real 0m17.862s 00:14:54.498 user 0m17.218s 00:14:54.498 sys 0m2.019s 00:14:54.498 13:28:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:54.498 13:28:59 -- common/autotest_common.sh@10 -- # set +x 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:54.498 13:28:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:54.498 13:28:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.498 13:28:59 -- common/autotest_common.sh@10 -- # set +x 00:14:54.498 ************************************ 00:14:54.498 START TEST lvs_grow_dirty 00:14:54.498 ************************************ 00:14:54.498 13:28:59 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.498 13:28:59 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:54.756 13:29:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:54.756 13:29:00 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:55.015 13:29:00 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 lvol 150 00:14:55.274 13:29:00 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:14:55.274 13:29:00 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:55.274 13:29:00 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:55.532 [2024-12-15 13:29:01.131456] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:55.532 [2024-12-15 13:29:01.131548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:55.532 true 00:14:55.532 13:29:01 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:14:55.532 13:29:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:55.791 13:29:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:55.791 13:29:01 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:56.050 13:29:01 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:14:56.309 13:29:01 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:56.567 13:29:02 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.825 13:29:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84227 00:14:56.825 13:29:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.825 13:29:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84227 /var/tmp/bdevperf.sock 00:14:56.825 13:29:02 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:56.825 13:29:02 -- common/autotest_common.sh@829 -- # '[' -z 84227 ']' 00:14:56.825 13:29:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:56.825 13:29:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.825 13:29:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:56.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:56.825 13:29:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.825 13:29:02 -- common/autotest_common.sh@10 -- # set +x 00:14:56.825 [2024-12-15 13:29:02.364933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:56.825 [2024-12-15 13:29:02.365043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84227 ] 00:14:56.825 [2024-12-15 13:29:02.508374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.083 [2024-12-15 13:29:02.571091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.649 13:29:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.649 13:29:03 -- common/autotest_common.sh@862 -- # return 0 00:14:57.649 13:29:03 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:57.907 Nvme0n1 00:14:57.907 13:29:03 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:58.166 [ 00:14:58.166 { 00:14:58.166 "aliases": [ 00:14:58.166 "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d" 00:14:58.166 ], 00:14:58.166 "assigned_rate_limits": { 00:14:58.166 "r_mbytes_per_sec": 0, 00:14:58.166 "rw_ios_per_sec": 0, 00:14:58.166 "rw_mbytes_per_sec": 0, 00:14:58.166 "w_mbytes_per_sec": 0 00:14:58.166 }, 00:14:58.166 "block_size": 4096, 00:14:58.166 "claimed": false, 00:14:58.166 "driver_specific": { 00:14:58.166 "mp_policy": "active_passive", 00:14:58.166 "nvme": [ 00:14:58.166 { 00:14:58.166 "ctrlr_data": { 00:14:58.166 "ana_reporting": false, 00:14:58.166 "cntlid": 1, 00:14:58.166 "firmware_revision": "24.01.1", 00:14:58.166 "model_number": "SPDK bdev Controller", 00:14:58.166 "multi_ctrlr": true, 00:14:58.166 "oacs": { 00:14:58.166 "firmware": 0, 00:14:58.166 "format": 0, 00:14:58.166 "ns_manage": 0, 00:14:58.166 "security": 0 00:14:58.166 }, 00:14:58.166 "serial_number": "SPDK0", 00:14:58.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.166 "vendor_id": "0x8086" 00:14:58.166 }, 00:14:58.166 "ns_data": { 00:14:58.166 "can_share": true, 00:14:58.166 "id": 1 00:14:58.166 }, 00:14:58.166 "trid": { 00:14:58.166 "adrfam": "IPv4", 00:14:58.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.166 "traddr": "10.0.0.2", 00:14:58.166 "trsvcid": "4420", 00:14:58.166 "trtype": "TCP" 00:14:58.166 }, 00:14:58.166 "vs": { 00:14:58.166 "nvme_version": "1.3" 00:14:58.166 } 00:14:58.166 } 00:14:58.166 ] 00:14:58.166 }, 00:14:58.166 "name": "Nvme0n1", 00:14:58.166 "num_blocks": 38912, 00:14:58.166 "product_name": "NVMe disk", 00:14:58.166 "supported_io_types": { 00:14:58.166 "abort": true, 00:14:58.166 "compare": true, 00:14:58.166 "compare_and_write": true, 00:14:58.166 "flush": true, 00:14:58.166 "nvme_admin": true, 00:14:58.166 "nvme_io": true, 00:14:58.166 "read": true, 00:14:58.166 "reset": true, 00:14:58.166 "unmap": true, 00:14:58.166 "write": true, 00:14:58.166 "write_zeroes": true 00:14:58.166 }, 00:14:58.166 "uuid": "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d", 00:14:58.166 "zoned": false 00:14:58.166 } 00:14:58.166 ] 00:14:58.166 13:29:03 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.166 13:29:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84279 00:14:58.166 13:29:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:58.424 Running I/O for 10 seconds... 
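On the initiator side, both passes drive the exported namespace the same way: bdevperf is started idle on its own RPC socket, a controller is attached to the target over TCP, and perform_tests releases the 10-second randwrite run whose per-second results follow. A minimal sketch of that sequence (paths abbreviated to the repo root as traced; the harness waits for the RPC socket before issuing calls):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    # Start bdevperf idle (-z): 4 KiB randwrite, queue depth 128, 10 s run.
    $SPDK/build/examples/bdevperf -r $SOCK -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the target's namespace as bdev Nvme0n1 over NVMe/TCP.
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Kick off the workload and collect the run shown below.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests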
00:14:59.359 Latency(us) 00:14:59.359 [2024-12-15T13:29:05.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.359 [2024-12-15T13:29:05.049Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.359 Nvme0n1 : 1.00 7551.00 29.50 0.00 0.00 0.00 0.00 0.00 00:14:59.359 [2024-12-15T13:29:05.049Z] =================================================================================================================== 00:14:59.359 [2024-12-15T13:29:05.049Z] Total : 7551.00 29.50 0.00 0.00 0.00 0.00 0.00 00:14:59.359 00:15:00.293 13:29:05 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:00.293 [2024-12-15T13:29:05.983Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.293 Nvme0n1 : 2.00 7577.50 29.60 0.00 0.00 0.00 0.00 0.00 00:15:00.293 [2024-12-15T13:29:05.983Z] =================================================================================================================== 00:15:00.293 [2024-12-15T13:29:05.983Z] Total : 7577.50 29.60 0.00 0.00 0.00 0.00 0.00 00:15:00.293 00:15:00.551 true 00:15:00.551 13:29:06 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:00.551 13:29:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:00.810 13:29:06 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:00.810 13:29:06 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:00.810 13:29:06 -- target/nvmf_lvs_grow.sh@65 -- # wait 84279 00:15:01.377 [2024-12-15T13:29:07.067Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.377 Nvme0n1 : 3.00 7604.00 29.70 0.00 0.00 0.00 0.00 0.00 00:15:01.377 [2024-12-15T13:29:07.067Z] =================================================================================================================== 00:15:01.377 [2024-12-15T13:29:07.067Z] Total : 7604.00 29.70 0.00 0.00 0.00 0.00 0.00 00:15:01.377 00:15:02.313 [2024-12-15T13:29:08.003Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.313 Nvme0n1 : 4.00 7622.00 29.77 0.00 0.00 0.00 0.00 0.00 00:15:02.313 [2024-12-15T13:29:08.003Z] =================================================================================================================== 00:15:02.313 [2024-12-15T13:29:08.003Z] Total : 7622.00 29.77 0.00 0.00 0.00 0.00 0.00 00:15:02.313 00:15:03.249 [2024-12-15T13:29:08.939Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.250 Nvme0n1 : 5.00 7619.60 29.76 0.00 0.00 0.00 0.00 0.00 00:15:03.250 [2024-12-15T13:29:08.940Z] =================================================================================================================== 00:15:03.250 [2024-12-15T13:29:08.940Z] Total : 7619.60 29.76 0.00 0.00 0.00 0.00 0.00 00:15:03.250 00:15:04.186 [2024-12-15T13:29:09.876Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.186 Nvme0n1 : 6.00 7459.83 29.14 0.00 0.00 0.00 0.00 0.00 00:15:04.186 [2024-12-15T13:29:09.876Z] =================================================================================================================== 00:15:04.186 [2024-12-15T13:29:09.876Z] Total : 7459.83 29.14 0.00 0.00 0.00 0.00 0.00 00:15:04.186 00:15:05.562 [2024-12-15T13:29:11.253Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:05.563 Nvme0n1 : 7.00 7458.71 29.14 0.00 0.00 0.00 0.00 0.00 00:15:05.563 [2024-12-15T13:29:11.253Z] =================================================================================================================== 00:15:05.563 [2024-12-15T13:29:11.253Z] Total : 7458.71 29.14 0.00 0.00 0.00 0.00 0.00 00:15:05.563 00:15:06.499 [2024-12-15T13:29:12.189Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.499 Nvme0n1 : 8.00 7403.12 28.92 0.00 0.00 0.00 0.00 0.00 00:15:06.499 [2024-12-15T13:29:12.189Z] =================================================================================================================== 00:15:06.499 [2024-12-15T13:29:12.189Z] Total : 7403.12 28.92 0.00 0.00 0.00 0.00 0.00 00:15:06.499 00:15:07.435 [2024-12-15T13:29:13.125Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.435 Nvme0n1 : 9.00 7385.67 28.85 0.00 0.00 0.00 0.00 0.00 00:15:07.435 [2024-12-15T13:29:13.125Z] =================================================================================================================== 00:15:07.435 [2024-12-15T13:29:13.125Z] Total : 7385.67 28.85 0.00 0.00 0.00 0.00 0.00 00:15:07.435 00:15:08.371 [2024-12-15T13:29:14.061Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.371 Nvme0n1 : 10.00 7360.50 28.75 0.00 0.00 0.00 0.00 0.00 00:15:08.371 [2024-12-15T13:29:14.061Z] =================================================================================================================== 00:15:08.371 [2024-12-15T13:29:14.061Z] Total : 7360.50 28.75 0.00 0.00 0.00 0.00 0.00 00:15:08.371 00:15:08.371 00:15:08.371 Latency(us) 00:15:08.371 [2024-12-15T13:29:14.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.371 [2024-12-15T13:29:14.061Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.371 Nvme0n1 : 10.01 7368.63 28.78 0.00 0.00 17363.54 5421.61 154426.65 00:15:08.371 [2024-12-15T13:29:14.061Z] =================================================================================================================== 00:15:08.371 [2024-12-15T13:29:14.061Z] Total : 7368.63 28.78 0.00 0.00 17363.54 5421.61 154426.65 00:15:08.371 0 00:15:08.371 13:29:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84227 00:15:08.371 13:29:13 -- common/autotest_common.sh@936 -- # '[' -z 84227 ']' 00:15:08.371 13:29:13 -- common/autotest_common.sh@940 -- # kill -0 84227 00:15:08.371 13:29:13 -- common/autotest_common.sh@941 -- # uname 00:15:08.371 13:29:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.371 13:29:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84227 00:15:08.371 13:29:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:08.371 13:29:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:08.371 killing process with pid 84227 00:15:08.371 13:29:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84227' 00:15:08.371 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.371 00:15:08.371 Latency(us) 00:15:08.371 [2024-12-15T13:29:14.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.371 [2024-12-15T13:29:14.061Z] =================================================================================================================== 00:15:08.371 [2024-12-15T13:29:14.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.371 13:29:13 -- common/autotest_common.sh@955 
-- # kill 84227 00:15:08.371 13:29:13 -- common/autotest_common.sh@960 -- # wait 84227 00:15:08.638 13:29:14 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:08.922 13:29:14 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:08.922 13:29:14 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83635 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@74 -- # wait 83635 00:15:09.196 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83635 Killed "${NVMF_APP[@]}" "$@" 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:09.196 13:29:14 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:09.196 13:29:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:09.196 13:29:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.196 13:29:14 -- common/autotest_common.sh@10 -- # set +x 00:15:09.196 13:29:14 -- nvmf/common.sh@469 -- # nvmfpid=84425 00:15:09.196 13:29:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.196 13:29:14 -- nvmf/common.sh@470 -- # waitforlisten 84425 00:15:09.196 13:29:14 -- common/autotest_common.sh@829 -- # '[' -z 84425 ']' 00:15:09.196 13:29:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.196 13:29:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.196 13:29:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.196 13:29:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.196 13:29:14 -- common/autotest_common.sh@10 -- # set +x 00:15:09.196 [2024-12-15 13:29:14.780653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:09.196 [2024-12-15 13:29:14.780754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.454 [2024-12-15 13:29:14.907027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.454 [2024-12-15 13:29:14.968930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.454 [2024-12-15 13:29:14.969077] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.454 [2024-12-15 13:29:14.969089] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.454 [2024-12-15 13:29:14.969096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
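The dirty pass differs only at this point: with clusters still allocated, the old target was taken down with kill -9, so the lvstore metadata on the aio file was never cleanly unloaded. The freshly started target now has to recover it; roughly, the next steps amount to re-creating the aio bdev on the same file and letting the blobstore replay its metadata (UUIDs as traced above, sketch only):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    # Re-open the same backing file in the new target; this triggers blobstore recovery.
    $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
    $RPC bdev_wait_for_examine
    # The lvol reappears under its old UUID once recovery has replayed the metadata.
    $RPC bdev_get_bdevs -b 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d -t 2000
    # Cluster counts survive the crash: still 61 free out of the 99 grown clusters.
    $RPC bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 \
        | jq -r '.[0].free_clusters, .[0].total_data_clusters'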
00:15:09.454 [2024-12-15 13:29:14.969120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.390 13:29:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.390 13:29:15 -- common/autotest_common.sh@862 -- # return 0 00:15:10.390 13:29:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.390 13:29:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.390 13:29:15 -- common/autotest_common.sh@10 -- # set +x 00:15:10.390 13:29:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.390 13:29:15 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.390 [2024-12-15 13:29:16.025359] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:10.390 [2024-12-15 13:29:16.025785] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:10.390 [2024-12-15 13:29:16.026037] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:10.390 13:29:16 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:10.390 13:29:16 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:15:10.390 13:29:16 -- common/autotest_common.sh@897 -- # local bdev_name=6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:15:10.390 13:29:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.390 13:29:16 -- common/autotest_common.sh@899 -- # local i 00:15:10.390 13:29:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.390 13:29:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.390 13:29:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:10.649 13:29:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d -t 2000 00:15:10.908 [ 00:15:10.908 { 00:15:10.908 "aliases": [ 00:15:10.908 "lvs/lvol" 00:15:10.908 ], 00:15:10.908 "assigned_rate_limits": { 00:15:10.908 "r_mbytes_per_sec": 0, 00:15:10.908 "rw_ios_per_sec": 0, 00:15:10.908 "rw_mbytes_per_sec": 0, 00:15:10.908 "w_mbytes_per_sec": 0 00:15:10.908 }, 00:15:10.908 "block_size": 4096, 00:15:10.908 "claimed": false, 00:15:10.908 "driver_specific": { 00:15:10.908 "lvol": { 00:15:10.908 "base_bdev": "aio_bdev", 00:15:10.908 "clone": false, 00:15:10.908 "esnap_clone": false, 00:15:10.908 "lvol_store_uuid": "d90f5c51-e251-457a-990b-3c8c72f0a1f8", 00:15:10.908 "snapshot": false, 00:15:10.908 "thin_provision": false 00:15:10.908 } 00:15:10.908 }, 00:15:10.908 "name": "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d", 00:15:10.908 "num_blocks": 38912, 00:15:10.908 "product_name": "Logical Volume", 00:15:10.908 "supported_io_types": { 00:15:10.908 "abort": false, 00:15:10.908 "compare": false, 00:15:10.908 "compare_and_write": false, 00:15:10.908 "flush": false, 00:15:10.908 "nvme_admin": false, 00:15:10.908 "nvme_io": false, 00:15:10.908 "read": true, 00:15:10.908 "reset": true, 00:15:10.908 "unmap": true, 00:15:10.908 "write": true, 00:15:10.908 "write_zeroes": true 00:15:10.908 }, 00:15:10.908 "uuid": "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d", 00:15:10.908 "zoned": false 00:15:10.908 } 00:15:10.908 ] 00:15:11.167 13:29:16 -- common/autotest_common.sh@905 -- # return 0 00:15:11.167 13:29:16 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:11.167 13:29:16 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:11.167 13:29:16 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:11.167 13:29:16 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:11.167 13:29:16 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:11.426 13:29:17 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:11.426 13:29:17 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.685 [2024-12-15 13:29:17.194911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.685 13:29:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:11.685 13:29:17 -- common/autotest_common.sh@650 -- # local es=0 00:15:11.685 13:29:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:11.685 13:29:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.685 13:29:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.685 13:29:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.685 13:29:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.685 13:29:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.685 13:29:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.685 13:29:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.685 13:29:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:11.685 13:29:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:11.944 2024/12/15 13:29:17 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d90f5c51-e251-457a-990b-3c8c72f0a1f8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:11.944 request: 00:15:11.944 { 00:15:11.944 "method": "bdev_lvol_get_lvstores", 00:15:11.944 "params": { 00:15:11.944 "uuid": "d90f5c51-e251-457a-990b-3c8c72f0a1f8" 00:15:11.944 } 00:15:11.944 } 00:15:11.944 Got JSON-RPC error response 00:15:11.944 GoRPCClient: error on JSON-RPC call 00:15:11.944 13:29:17 -- common/autotest_common.sh@653 -- # es=1 00:15:11.944 13:29:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.944 13:29:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.944 13:29:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.944 13:29:17 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.202 aio_bdev 00:15:12.202 13:29:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:15:12.203 13:29:17 -- common/autotest_common.sh@897 -- # local bdev_name=6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:15:12.203 13:29:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.203 13:29:17 -- 
common/autotest_common.sh@899 -- # local i 00:15:12.203 13:29:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.203 13:29:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.203 13:29:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.461 13:29:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d -t 2000 00:15:12.721 [ 00:15:12.721 { 00:15:12.721 "aliases": [ 00:15:12.721 "lvs/lvol" 00:15:12.721 ], 00:15:12.721 "assigned_rate_limits": { 00:15:12.721 "r_mbytes_per_sec": 0, 00:15:12.721 "rw_ios_per_sec": 0, 00:15:12.721 "rw_mbytes_per_sec": 0, 00:15:12.721 "w_mbytes_per_sec": 0 00:15:12.721 }, 00:15:12.721 "block_size": 4096, 00:15:12.721 "claimed": false, 00:15:12.721 "driver_specific": { 00:15:12.721 "lvol": { 00:15:12.721 "base_bdev": "aio_bdev", 00:15:12.721 "clone": false, 00:15:12.721 "esnap_clone": false, 00:15:12.721 "lvol_store_uuid": "d90f5c51-e251-457a-990b-3c8c72f0a1f8", 00:15:12.721 "snapshot": false, 00:15:12.721 "thin_provision": false 00:15:12.721 } 00:15:12.721 }, 00:15:12.721 "name": "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d", 00:15:12.721 "num_blocks": 38912, 00:15:12.721 "product_name": "Logical Volume", 00:15:12.721 "supported_io_types": { 00:15:12.721 "abort": false, 00:15:12.721 "compare": false, 00:15:12.721 "compare_and_write": false, 00:15:12.721 "flush": false, 00:15:12.721 "nvme_admin": false, 00:15:12.721 "nvme_io": false, 00:15:12.721 "read": true, 00:15:12.721 "reset": true, 00:15:12.721 "unmap": true, 00:15:12.721 "write": true, 00:15:12.721 "write_zeroes": true 00:15:12.721 }, 00:15:12.721 "uuid": "6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d", 00:15:12.721 "zoned": false 00:15:12.721 } 00:15:12.721 ] 00:15:12.721 13:29:18 -- common/autotest_common.sh@905 -- # return 0 00:15:12.721 13:29:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:12.721 13:29:18 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:12.721 13:29:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:12.721 13:29:18 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:12.721 13:29:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:12.980 13:29:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:12.980 13:29:18 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fa4bbbc-6f77-499c-8373-13fbc4ba4a3d 00:15:13.239 13:29:18 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d90f5c51-e251-457a-990b-3c8c72f0a1f8 00:15:13.497 13:29:19 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.756 13:29:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:14.015 00:15:14.015 real 0m19.636s 00:15:14.015 user 0m38.900s 00:15:14.015 sys 0m9.239s 00:15:14.015 13:29:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:14.015 13:29:19 -- common/autotest_common.sh@10 -- # set +x 00:15:14.015 ************************************ 00:15:14.015 END TEST lvs_grow_dirty 00:15:14.015 ************************************ 00:15:14.015 13:29:19 -- target/nvmf_lvs_grow.sh@1 
-- # process_shm --id 0 00:15:14.015 13:29:19 -- common/autotest_common.sh@806 -- # type=--id 00:15:14.015 13:29:19 -- common/autotest_common.sh@807 -- # id=0 00:15:14.015 13:29:19 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:14.015 13:29:19 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:14.015 13:29:19 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:14.015 13:29:19 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:14.015 13:29:19 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:14.015 13:29:19 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:14.015 nvmf_trace.0 00:15:14.015 13:29:19 -- common/autotest_common.sh@821 -- # return 0 00:15:14.015 13:29:19 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:14.015 13:29:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.015 13:29:19 -- nvmf/common.sh@116 -- # sync 00:15:15.391 13:29:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.391 13:29:20 -- nvmf/common.sh@119 -- # set +e 00:15:15.391 13:29:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.391 13:29:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.391 rmmod nvme_tcp 00:15:15.391 rmmod nvme_fabrics 00:15:15.391 rmmod nvme_keyring 00:15:15.391 13:29:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.391 13:29:21 -- nvmf/common.sh@123 -- # set -e 00:15:15.391 13:29:21 -- nvmf/common.sh@124 -- # return 0 00:15:15.391 13:29:21 -- nvmf/common.sh@477 -- # '[' -n 84425 ']' 00:15:15.391 13:29:21 -- nvmf/common.sh@478 -- # killprocess 84425 00:15:15.391 13:29:21 -- common/autotest_common.sh@936 -- # '[' -z 84425 ']' 00:15:15.391 13:29:21 -- common/autotest_common.sh@940 -- # kill -0 84425 00:15:15.391 13:29:21 -- common/autotest_common.sh@941 -- # uname 00:15:15.391 13:29:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.391 13:29:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84425 00:15:15.391 13:29:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.391 13:29:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.391 killing process with pid 84425 00:15:15.391 13:29:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84425' 00:15:15.391 13:29:21 -- common/autotest_common.sh@955 -- # kill 84425 00:15:15.391 13:29:21 -- common/autotest_common.sh@960 -- # wait 84425 00:15:15.650 13:29:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.650 13:29:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.650 13:29:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.650 13:29:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.650 13:29:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.650 13:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.650 13:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.650 13:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.650 13:29:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:15.650 00:15:15.650 real 0m41.018s 00:15:15.650 user 1m3.166s 00:15:15.650 sys 0m13.108s 00:15:15.650 13:29:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:15.650 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:15:15.650 ************************************ 00:15:15.650 END TEST 
nvmf_lvs_grow 00:15:15.650 ************************************ 00:15:15.650 13:29:21 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:15.650 13:29:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.650 13:29:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.650 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:15:15.650 ************************************ 00:15:15.650 START TEST nvmf_bdev_io_wait 00:15:15.650 ************************************ 00:15:15.650 13:29:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:15.909 * Looking for test storage... 00:15:15.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:15.909 13:29:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:15.909 13:29:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:15.909 13:29:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:15.909 13:29:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:15.909 13:29:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:15.909 13:29:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:15.909 13:29:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:15.909 13:29:21 -- scripts/common.sh@335 -- # IFS=.-: 00:15:15.909 13:29:21 -- scripts/common.sh@335 -- # read -ra ver1 00:15:15.909 13:29:21 -- scripts/common.sh@336 -- # IFS=.-: 00:15:15.909 13:29:21 -- scripts/common.sh@336 -- # read -ra ver2 00:15:15.909 13:29:21 -- scripts/common.sh@337 -- # local 'op=<' 00:15:15.909 13:29:21 -- scripts/common.sh@339 -- # ver1_l=2 00:15:15.909 13:29:21 -- scripts/common.sh@340 -- # ver2_l=1 00:15:15.909 13:29:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:15.909 13:29:21 -- scripts/common.sh@343 -- # case "$op" in 00:15:15.909 13:29:21 -- scripts/common.sh@344 -- # : 1 00:15:15.909 13:29:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:15.909 13:29:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:15.909 13:29:21 -- scripts/common.sh@364 -- # decimal 1 00:15:15.909 13:29:21 -- scripts/common.sh@352 -- # local d=1 00:15:15.909 13:29:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:15.909 13:29:21 -- scripts/common.sh@354 -- # echo 1 00:15:15.909 13:29:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:15.909 13:29:21 -- scripts/common.sh@365 -- # decimal 2 00:15:15.909 13:29:21 -- scripts/common.sh@352 -- # local d=2 00:15:15.909 13:29:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:15.909 13:29:21 -- scripts/common.sh@354 -- # echo 2 00:15:15.909 13:29:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:15.909 13:29:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:15.909 13:29:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:15.909 13:29:21 -- scripts/common.sh@367 -- # return 0 00:15:15.909 13:29:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:15.909 13:29:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.909 --rc genhtml_branch_coverage=1 00:15:15.909 --rc genhtml_function_coverage=1 00:15:15.909 --rc genhtml_legend=1 00:15:15.909 --rc geninfo_all_blocks=1 00:15:15.909 --rc geninfo_unexecuted_blocks=1 00:15:15.909 00:15:15.909 ' 00:15:15.909 13:29:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.909 --rc genhtml_branch_coverage=1 00:15:15.909 --rc genhtml_function_coverage=1 00:15:15.909 --rc genhtml_legend=1 00:15:15.909 --rc geninfo_all_blocks=1 00:15:15.909 --rc geninfo_unexecuted_blocks=1 00:15:15.909 00:15:15.909 ' 00:15:15.909 13:29:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.909 --rc genhtml_branch_coverage=1 00:15:15.909 --rc genhtml_function_coverage=1 00:15:15.909 --rc genhtml_legend=1 00:15:15.909 --rc geninfo_all_blocks=1 00:15:15.909 --rc geninfo_unexecuted_blocks=1 00:15:15.909 00:15:15.909 ' 00:15:15.909 13:29:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:15.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:15.909 --rc genhtml_branch_coverage=1 00:15:15.909 --rc genhtml_function_coverage=1 00:15:15.909 --rc genhtml_legend=1 00:15:15.909 --rc geninfo_all_blocks=1 00:15:15.909 --rc geninfo_unexecuted_blocks=1 00:15:15.909 00:15:15.909 ' 00:15:15.909 13:29:21 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.909 13:29:21 -- nvmf/common.sh@7 -- # uname -s 00:15:15.909 13:29:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.909 13:29:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.909 13:29:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.909 13:29:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.910 13:29:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.910 13:29:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.910 13:29:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.910 13:29:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.910 13:29:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.910 13:29:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.910 13:29:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
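The lt/cmp_versions trace above is just a component-wise version compare used to decide which lcov flags to emit: split both versions on dots, treat missing components as zero, and let the first differing component decide. A stand-alone equivalent (ver_lt is a hypothetical name, not the exact helper scripts/common.sh defines):

  ver_lt() {    # succeeds when $1 sorts strictly before $2
      local IFS=.-
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'pre-2.x lcov: use the --rc lcov_*_coverage options'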
00:15:15.910 13:29:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:15.910 13:29:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.910 13:29:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.910 13:29:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:15.910 13:29:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.910 13:29:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.910 13:29:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.910 13:29:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.910 13:29:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.910 13:29:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.910 13:29:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.910 13:29:21 -- paths/export.sh@5 -- # export PATH 00:15:15.910 13:29:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.910 13:29:21 -- nvmf/common.sh@46 -- # : 0 00:15:15.910 13:29:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:15.910 13:29:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:15.910 13:29:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:15.910 13:29:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.910 13:29:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.910 13:29:21 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:15.910 13:29:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:15.910 13:29:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:15.910 13:29:21 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.910 13:29:21 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.910 13:29:21 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:15.910 13:29:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:15.910 13:29:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.910 13:29:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:15.910 13:29:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:15.910 13:29:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:15.910 13:29:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.910 13:29:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.910 13:29:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.910 13:29:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:15.910 13:29:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:15.910 13:29:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:15.910 13:29:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:15.910 13:29:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:15.910 13:29:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:15.910 13:29:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.910 13:29:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.910 13:29:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:15.910 13:29:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:15.910 13:29:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:15.910 13:29:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:15.910 13:29:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:15.910 13:29:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.910 13:29:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:15.910 13:29:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:15.910 13:29:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:15.910 13:29:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:15.910 13:29:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:15.910 13:29:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:15.910 Cannot find device "nvmf_tgt_br" 00:15:15.910 13:29:21 -- nvmf/common.sh@154 -- # true 00:15:15.910 13:29:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.910 Cannot find device "nvmf_tgt_br2" 00:15:15.910 13:29:21 -- nvmf/common.sh@155 -- # true 00:15:15.910 13:29:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:15.910 13:29:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:15.910 Cannot find device "nvmf_tgt_br" 00:15:15.910 13:29:21 -- nvmf/common.sh@157 -- # true 00:15:15.910 13:29:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:15.910 Cannot find device "nvmf_tgt_br2" 00:15:15.910 13:29:21 -- nvmf/common.sh@158 -- # true 00:15:15.910 13:29:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:16.169 13:29:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:16.169 13:29:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.169 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.169 13:29:21 -- nvmf/common.sh@161 -- # true 00:15:16.169 13:29:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.169 13:29:21 -- nvmf/common.sh@162 -- # true 00:15:16.169 13:29:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.169 13:29:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.169 13:29:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.169 13:29:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.169 13:29:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.169 13:29:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.169 13:29:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.169 13:29:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.169 13:29:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.169 13:29:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:16.169 13:29:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:16.169 13:29:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:16.169 13:29:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:16.169 13:29:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.169 13:29:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.169 13:29:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.169 13:29:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:16.169 13:29:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:16.169 13:29:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.169 13:29:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.169 13:29:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.169 13:29:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.169 13:29:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.169 13:29:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:16.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:15:16.169 00:15:16.169 --- 10.0.0.2 ping statistics --- 00:15:16.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.169 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:16.169 13:29:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:16.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:16.169 00:15:16.169 --- 10.0.0.3 ping statistics --- 00:15:16.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.169 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:16.169 13:29:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:16.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:16.169 00:15:16.169 --- 10.0.0.1 ping statistics --- 00:15:16.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.169 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:16.169 13:29:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.169 13:29:21 -- nvmf/common.sh@421 -- # return 0 00:15:16.169 13:29:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.169 13:29:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.169 13:29:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.169 13:29:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.169 13:29:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.169 13:29:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.169 13:29:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:16.169 13:29:21 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:16.169 13:29:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:16.169 13:29:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:16.169 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.169 13:29:21 -- nvmf/common.sh@469 -- # nvmfpid=84858 00:15:16.169 13:29:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:16.169 13:29:21 -- nvmf/common.sh@470 -- # waitforlisten 84858 00:15:16.169 13:29:21 -- common/autotest_common.sh@829 -- # '[' -z 84858 ']' 00:15:16.169 13:29:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.169 13:29:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.170 13:29:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.170 13:29:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.170 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:15:16.429 [2024-12-15 13:29:21.891711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:16.429 [2024-12-15 13:29:21.891807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.429 [2024-12-15 13:29:22.027906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.429 [2024-12-15 13:29:22.091671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.429 [2024-12-15 13:29:22.091817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.429 [2024-12-15 13:29:22.091829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.429 [2024-12-15 13:29:22.091837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
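The "Cannot find device"/"Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever a previous run left behind, then rebuilds the topology that the three pings verify. Condensed to one target interface (the second veth pair, the second target address and the iptables ACCEPT rules are set up the same way and are omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br    # the bridge joins the host-side peers of both pairs
  ip link set nvmf_tgt_br  master nvmf_br

With that in place 10.0.0.1 (initiator, host side) and 10.0.0.2/10.0.0.3 (target, inside nvmf_tgt_ns_spdk) sit on one L2 segment, which is what the ping round-trips above confirm.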
00:15:16.429 [2024-12-15 13:29:22.091945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.429 [2024-12-15 13:29:22.092447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.429 [2024-12-15 13:29:22.092897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.429 [2024-12-15 13:29:22.092909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.365 13:29:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.365 13:29:22 -- common/autotest_common.sh@862 -- # return 0 00:15:17.365 13:29:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:17.365 13:29:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.365 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.365 13:29:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.365 13:29:22 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:17.365 13:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 13:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:22 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:17.366 13:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 13:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:22 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.366 13:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 [2024-12-15 13:29:22.967344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.366 13:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:22 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.366 13:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:22 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 Malloc0 00:15:17.366 13:29:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:22 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.366 13:29:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 13:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.366 13:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 13:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.366 13:29:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.366 13:29:23 -- common/autotest_common.sh@10 -- # set +x 00:15:17.366 [2024-12-15 13:29:23.023153] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.366 13:29:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84911 00:15:17.366 13:29:23 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84913 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # config=() 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # local subsystem config 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84915 00:15:17.366 13:29:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:17.366 { 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme$subsystem", 00:15:17.366 "trtype": "$TEST_TRANSPORT", 00:15:17.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "$NVMF_PORT", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.366 "hdgst": ${hdgst:-false}, 00:15:17.366 "ddgst": ${ddgst:-false} 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 } 00:15:17.366 EOF 00:15:17.366 )") 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84917 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@35 -- # sync 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # cat 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # config=() 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # local subsystem config 00:15:17.366 13:29:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:17.366 { 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme$subsystem", 00:15:17.366 "trtype": "$TEST_TRANSPORT", 00:15:17.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "$NVMF_PORT", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.366 "hdgst": ${hdgst:-false}, 00:15:17.366 "ddgst": ${ddgst:-false} 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 } 00:15:17.366 EOF 00:15:17.366 )") 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # config=() 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # local subsystem config 00:15:17.366 13:29:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:17.366 { 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme$subsystem", 00:15:17.366 "trtype": "$TEST_TRANSPORT", 00:15:17.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "$NVMF_PORT", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:15:17.366 "hdgst": ${hdgst:-false}, 00:15:17.366 "ddgst": ${ddgst:-false} 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 } 00:15:17.366 EOF 00:15:17.366 )") 00:15:17.366 13:29:23 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # config=() 00:15:17.366 13:29:23 -- nvmf/common.sh@520 -- # local subsystem config 00:15:17.366 13:29:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:17.366 { 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme$subsystem", 00:15:17.366 "trtype": "$TEST_TRANSPORT", 00:15:17.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "$NVMF_PORT", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.366 "hdgst": ${hdgst:-false}, 00:15:17.366 "ddgst": ${ddgst:-false} 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 } 00:15:17.366 EOF 00:15:17.366 )") 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # cat 00:15:17.366 13:29:23 -- nvmf/common.sh@544 -- # jq . 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # cat 00:15:17.366 13:29:23 -- nvmf/common.sh@542 -- # cat 00:15:17.366 13:29:23 -- nvmf/common.sh@544 -- # jq . 00:15:17.366 13:29:23 -- nvmf/common.sh@545 -- # IFS=, 00:15:17.366 13:29:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme1", 00:15:17.366 "trtype": "tcp", 00:15:17.366 "traddr": "10.0.0.2", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "4420", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.366 "hdgst": false, 00:15:17.366 "ddgst": false 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 }' 00:15:17.366 13:29:23 -- nvmf/common.sh@544 -- # jq . 00:15:17.366 13:29:23 -- nvmf/common.sh@545 -- # IFS=, 00:15:17.366 13:29:23 -- nvmf/common.sh@545 -- # IFS=, 00:15:17.366 13:29:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme1", 00:15:17.366 "trtype": "tcp", 00:15:17.366 "traddr": "10.0.0.2", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "4420", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.366 "hdgst": false, 00:15:17.366 "ddgst": false 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 }' 00:15:17.366 13:29:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:17.366 "params": { 00:15:17.366 "name": "Nvme1", 00:15:17.366 "trtype": "tcp", 00:15:17.366 "traddr": "10.0.0.2", 00:15:17.366 "adrfam": "ipv4", 00:15:17.366 "trsvcid": "4420", 00:15:17.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.366 "hdgst": false, 00:15:17.366 "ddgst": false 00:15:17.366 }, 00:15:17.366 "method": "bdev_nvme_attach_controller" 00:15:17.366 }' 00:15:17.626 13:29:23 -- nvmf/common.sh@544 -- # jq . 
00:15:17.626 13:29:23 -- nvmf/common.sh@545 -- # IFS=, 00:15:17.626 13:29:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:17.626 "params": { 00:15:17.626 "name": "Nvme1", 00:15:17.626 "trtype": "tcp", 00:15:17.626 "traddr": "10.0.0.2", 00:15:17.626 "adrfam": "ipv4", 00:15:17.626 "trsvcid": "4420", 00:15:17.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.626 "hdgst": false, 00:15:17.626 "ddgst": false 00:15:17.626 }, 00:15:17.626 "method": "bdev_nvme_attach_controller" 00:15:17.626 }' 00:15:17.626 [2024-12-15 13:29:23.099140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.626 [2024-12-15 13:29:23.099836] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:17.626 13:29:23 -- target/bdev_io_wait.sh@37 -- # wait 84911 00:15:17.626 [2024-12-15 13:29:23.114359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.626 [2024-12-15 13:29:23.114433] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:17.626 [2024-12-15 13:29:23.115271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.626 [2024-12-15 13:29:23.115344] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:17.626 [2024-12-15 13:29:23.116129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:17.626 [2024-12-15 13:29:23.116203] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:17.626 [2024-12-15 13:29:23.302224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.884 [2024-12-15 13:29:23.364120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:17.884 [2024-12-15 13:29:23.377944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.884 [2024-12-15 13:29:23.445568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:17.884 [2024-12-15 13:29:23.450471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.885 [2024-12-15 13:29:23.516502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:17.885 [2024-12-15 13:29:23.532049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.885 Running I/O for 1 seconds... 00:15:18.143 Running I/O for 1 seconds... 00:15:18.143 [2024-12-15 13:29:23.596390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:18.143 Running I/O for 1 seconds... 00:15:18.143 Running I/O for 1 seconds... 
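The four jobs run for one second in parallel against the same nqn.2016-06.io.spdk:cnode1 subsystem, one reactor core each (masks 0x10/0x20/0x40/0x80 map to cores 4-7 in the reactor messages above), and the test only moves on to the per-workload result tables below and to teardown once it has joined all of them:

  wait "$WRITE_PID"   # write  job, 84911 in this run
  wait "$READ_PID"    # read   job, 84913
  wait "$FLUSH_PID"   # flush  job, 84915
  wait "$UNMAP_PID"   # unmap  job, 84917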
00:15:19.079 00:15:19.079 Latency(us) 00:15:19.079 [2024-12-15T13:29:24.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.079 [2024-12-15T13:29:24.769Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:19.079 Nvme1n1 : 1.00 206036.66 804.83 0.00 0.00 618.90 247.62 1303.27 00:15:19.079 [2024-12-15T13:29:24.769Z] =================================================================================================================== 00:15:19.079 [2024-12-15T13:29:24.769Z] Total : 206036.66 804.83 0.00 0.00 618.90 247.62 1303.27 00:15:19.079 00:15:19.079 Latency(us) 00:15:19.079 [2024-12-15T13:29:24.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.079 [2024-12-15T13:29:24.769Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:19.079 Nvme1n1 : 1.01 11313.10 44.19 0.00 0.00 11273.00 2487.39 13405.09 00:15:19.079 [2024-12-15T13:29:24.769Z] =================================================================================================================== 00:15:19.079 [2024-12-15T13:29:24.769Z] Total : 11313.10 44.19 0.00 0.00 11273.00 2487.39 13405.09 00:15:19.079 00:15:19.079 Latency(us) 00:15:19.079 [2024-12-15T13:29:24.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.079 [2024-12-15T13:29:24.769Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:19.079 Nvme1n1 : 1.01 8492.32 33.17 0.00 0.00 14997.22 9234.62 25976.09 00:15:19.079 [2024-12-15T13:29:24.769Z] =================================================================================================================== 00:15:19.079 [2024-12-15T13:29:24.769Z] Total : 8492.32 33.17 0.00 0.00 14997.22 9234.62 25976.09 00:15:19.079 00:15:19.079 Latency(us) 00:15:19.079 [2024-12-15T13:29:24.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.079 [2024-12-15T13:29:24.770Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:19.080 Nvme1n1 : 1.01 9342.08 36.49 0.00 0.00 13649.80 6613.18 24188.74 00:15:19.080 [2024-12-15T13:29:24.770Z] =================================================================================================================== 00:15:19.080 [2024-12-15T13:29:24.770Z] Total : 9342.08 36.49 0.00 0.00 13649.80 6613.18 24188.74 00:15:19.338 13:29:24 -- target/bdev_io_wait.sh@38 -- # wait 84913 00:15:19.338 13:29:24 -- target/bdev_io_wait.sh@39 -- # wait 84915 00:15:19.338 13:29:24 -- target/bdev_io_wait.sh@40 -- # wait 84917 00:15:19.597 13:29:25 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.597 13:29:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.597 13:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:19.597 13:29:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.597 13:29:25 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:19.597 13:29:25 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:19.597 13:29:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:19.597 13:29:25 -- nvmf/common.sh@116 -- # sync 00:15:19.597 13:29:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:19.597 13:29:25 -- nvmf/common.sh@119 -- # set +e 00:15:19.597 13:29:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:19.597 13:29:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:19.597 rmmod nvme_tcp 00:15:19.597 rmmod nvme_fabrics 00:15:19.597 rmmod nvme_keyring 00:15:19.597 13:29:25 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:19.597 13:29:25 -- nvmf/common.sh@123 -- # set -e 00:15:19.597 13:29:25 -- nvmf/common.sh@124 -- # return 0 00:15:19.597 13:29:25 -- nvmf/common.sh@477 -- # '[' -n 84858 ']' 00:15:19.597 13:29:25 -- nvmf/common.sh@478 -- # killprocess 84858 00:15:19.597 13:29:25 -- common/autotest_common.sh@936 -- # '[' -z 84858 ']' 00:15:19.597 13:29:25 -- common/autotest_common.sh@940 -- # kill -0 84858 00:15:19.597 13:29:25 -- common/autotest_common.sh@941 -- # uname 00:15:19.597 13:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.597 13:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84858 00:15:19.597 killing process with pid 84858 00:15:19.597 13:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.597 13:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.597 13:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84858' 00:15:19.597 13:29:25 -- common/autotest_common.sh@955 -- # kill 84858 00:15:19.597 13:29:25 -- common/autotest_common.sh@960 -- # wait 84858 00:15:19.854 13:29:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:19.854 13:29:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:19.854 13:29:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:19.854 13:29:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:19.854 13:29:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:19.854 13:29:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.854 13:29:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.855 13:29:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.855 13:29:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:19.855 00:15:19.855 real 0m4.095s 00:15:19.855 user 0m17.802s 00:15:19.855 sys 0m2.136s 00:15:19.855 13:29:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:19.855 13:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:19.855 ************************************ 00:15:19.855 END TEST nvmf_bdev_io_wait 00:15:19.855 ************************************ 00:15:19.855 13:29:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:19.855 13:29:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.855 13:29:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.855 13:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:19.855 ************************************ 00:15:19.855 START TEST nvmf_queue_depth 00:15:19.855 ************************************ 00:15:19.855 13:29:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:19.855 * Looking for test storage... 
00:15:19.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:19.855 13:29:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:19.855 13:29:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:19.855 13:29:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:20.114 13:29:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:20.114 13:29:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:20.114 13:29:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:20.114 13:29:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:20.114 13:29:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:20.114 13:29:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:20.114 13:29:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.114 13:29:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:20.114 13:29:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:20.114 13:29:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:20.114 13:29:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:20.114 13:29:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:20.114 13:29:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:20.114 13:29:25 -- scripts/common.sh@344 -- # : 1 00:15:20.114 13:29:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:20.114 13:29:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.115 13:29:25 -- scripts/common.sh@364 -- # decimal 1 00:15:20.115 13:29:25 -- scripts/common.sh@352 -- # local d=1 00:15:20.115 13:29:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.115 13:29:25 -- scripts/common.sh@354 -- # echo 1 00:15:20.115 13:29:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:20.115 13:29:25 -- scripts/common.sh@365 -- # decimal 2 00:15:20.115 13:29:25 -- scripts/common.sh@352 -- # local d=2 00:15:20.115 13:29:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.115 13:29:25 -- scripts/common.sh@354 -- # echo 2 00:15:20.115 13:29:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:20.115 13:29:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:20.115 13:29:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:20.115 13:29:25 -- scripts/common.sh@367 -- # return 0 00:15:20.115 13:29:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.115 13:29:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.115 --rc genhtml_branch_coverage=1 00:15:20.115 --rc genhtml_function_coverage=1 00:15:20.115 --rc genhtml_legend=1 00:15:20.115 --rc geninfo_all_blocks=1 00:15:20.115 --rc geninfo_unexecuted_blocks=1 00:15:20.115 00:15:20.115 ' 00:15:20.115 13:29:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.115 --rc genhtml_branch_coverage=1 00:15:20.115 --rc genhtml_function_coverage=1 00:15:20.115 --rc genhtml_legend=1 00:15:20.115 --rc geninfo_all_blocks=1 00:15:20.115 --rc geninfo_unexecuted_blocks=1 00:15:20.115 00:15:20.115 ' 00:15:20.115 13:29:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.115 --rc genhtml_branch_coverage=1 00:15:20.115 --rc genhtml_function_coverage=1 00:15:20.115 --rc genhtml_legend=1 00:15:20.115 --rc geninfo_all_blocks=1 00:15:20.115 --rc geninfo_unexecuted_blocks=1 00:15:20.115 00:15:20.115 ' 00:15:20.115 
13:29:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.115 --rc genhtml_branch_coverage=1 00:15:20.115 --rc genhtml_function_coverage=1 00:15:20.115 --rc genhtml_legend=1 00:15:20.115 --rc geninfo_all_blocks=1 00:15:20.115 --rc geninfo_unexecuted_blocks=1 00:15:20.115 00:15:20.115 ' 00:15:20.115 13:29:25 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.115 13:29:25 -- nvmf/common.sh@7 -- # uname -s 00:15:20.115 13:29:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.115 13:29:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.115 13:29:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.115 13:29:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.115 13:29:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.115 13:29:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.115 13:29:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.115 13:29:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.115 13:29:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.115 13:29:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:20.115 13:29:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:20.115 13:29:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.115 13:29:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.115 13:29:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.115 13:29:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.115 13:29:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.115 13:29:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.115 13:29:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.115 13:29:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.115 13:29:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.115 13:29:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.115 13:29:25 -- paths/export.sh@5 -- # export PATH 00:15:20.115 13:29:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.115 13:29:25 -- nvmf/common.sh@46 -- # : 0 00:15:20.115 13:29:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:20.115 13:29:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:20.115 13:29:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:20.115 13:29:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.115 13:29:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.115 13:29:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:20.115 13:29:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:20.115 13:29:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:20.115 13:29:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:20.115 13:29:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:20.115 13:29:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:20.115 13:29:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:20.115 13:29:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:20.115 13:29:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.115 13:29:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:20.115 13:29:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:20.115 13:29:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:20.115 13:29:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.115 13:29:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.115 13:29:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.115 13:29:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:20.115 13:29:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:20.115 13:29:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.115 13:29:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.115 13:29:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.115 13:29:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:20.115 13:29:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.115 13:29:25 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.115 13:29:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.115 13:29:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.115 13:29:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.115 13:29:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.115 13:29:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.115 13:29:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.115 13:29:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:20.115 13:29:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:20.115 Cannot find device "nvmf_tgt_br" 00:15:20.115 13:29:25 -- nvmf/common.sh@154 -- # true 00:15:20.115 13:29:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.115 Cannot find device "nvmf_tgt_br2" 00:15:20.115 13:29:25 -- nvmf/common.sh@155 -- # true 00:15:20.115 13:29:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:20.115 13:29:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:20.115 Cannot find device "nvmf_tgt_br" 00:15:20.115 13:29:25 -- nvmf/common.sh@157 -- # true 00:15:20.115 13:29:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:20.115 Cannot find device "nvmf_tgt_br2" 00:15:20.115 13:29:25 -- nvmf/common.sh@158 -- # true 00:15:20.116 13:29:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:20.116 13:29:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:20.116 13:29:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.116 13:29:25 -- nvmf/common.sh@161 -- # true 00:15:20.116 13:29:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.116 13:29:25 -- nvmf/common.sh@162 -- # true 00:15:20.116 13:29:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.116 13:29:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.116 13:29:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.116 13:29:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.375 13:29:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.375 13:29:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:20.375 13:29:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:20.375 13:29:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:20.375 13:29:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:20.375 13:29:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:20.375 13:29:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:20.375 13:29:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:20.375 13:29:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:20.375 13:29:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:20.375 13:29:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:20.375 13:29:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:20.375 13:29:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:20.375 13:29:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:20.375 13:29:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:20.375 13:29:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:20.375 13:29:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:20.375 13:29:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:20.375 13:29:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:20.375 13:29:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:20.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:20.375 00:15:20.375 --- 10.0.0.2 ping statistics --- 00:15:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.375 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:20.375 13:29:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:20.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:20.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:20.375 00:15:20.375 --- 10.0.0.3 ping statistics --- 00:15:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.375 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:20.375 13:29:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:20.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:20.375 00:15:20.375 --- 10.0.0.1 ping statistics --- 00:15:20.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.375 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:20.375 13:29:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.375 13:29:25 -- nvmf/common.sh@421 -- # return 0 00:15:20.375 13:29:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.375 13:29:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.375 13:29:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.375 13:29:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.375 13:29:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.375 13:29:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.375 13:29:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.375 13:29:25 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:20.375 13:29:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.375 13:29:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.375 13:29:25 -- common/autotest_common.sh@10 -- # set +x 00:15:20.375 13:29:26 -- nvmf/common.sh@469 -- # nvmfpid=85153 00:15:20.375 13:29:26 -- nvmf/common.sh@470 -- # waitforlisten 85153 00:15:20.375 13:29:26 -- common/autotest_common.sh@829 -- # '[' -z 85153 ']' 00:15:20.375 13:29:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.375 13:29:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.375 13:29:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.375 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:20.375 13:29:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.375 13:29:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.375 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:20.375 [2024-12-15 13:29:26.057244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.375 [2024-12-15 13:29:26.057344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.634 [2024-12-15 13:29:26.200276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.634 [2024-12-15 13:29:26.264360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.634 [2024-12-15 13:29:26.264533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.634 [2024-12-15 13:29:26.264549] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.634 [2024-12-15 13:29:26.264560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.634 [2024-12-15 13:29:26.264607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.572 13:29:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.572 13:29:26 -- common/autotest_common.sh@862 -- # return 0 00:15:21.572 13:29:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.572 13:29:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.572 13:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 13:29:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.572 13:29:27 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.572 13:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 [2024-12-15 13:29:27.025858] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.572 13:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.572 13:29:27 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:21.572 13:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 Malloc0 00:15:21.572 13:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.572 13:29:27 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:21.572 13:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 13:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.572 13:29:27 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.572 13:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 13:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.572 13:29:27 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:21.572 13:29:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 [2024-12-15 13:29:27.090228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.572 13:29:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.572 13:29:27 -- target/queue_depth.sh@30 -- # bdevperf_pid=85203 00:15:21.572 13:29:27 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.572 13:29:27 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:21.572 13:29:27 -- target/queue_depth.sh@33 -- # waitforlisten 85203 /var/tmp/bdevperf.sock 00:15:21.572 13:29:27 -- common/autotest_common.sh@829 -- # '[' -z 85203 ']' 00:15:21.572 13:29:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.572 13:29:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.572 13:29:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.572 13:29:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.572 13:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:21.572 [2024-12-15 13:29:27.149837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:21.572 [2024-12-15 13:29:27.149928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85203 ] 00:15:21.831 [2024-12-15 13:29:27.288874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.831 [2024-12-15 13:29:27.344074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.768 13:29:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.768 13:29:28 -- common/autotest_common.sh@862 -- # return 0 00:15:22.768 13:29:28 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.768 13:29:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.768 13:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:22.768 NVMe0n1 00:15:22.768 13:29:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.768 13:29:28 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.768 Running I/O for 10 seconds... 
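Note: the queue_depth test above starts bdevperf in RPC-wait mode, attaches the TCP-exported namespace as a bdev, and then launches a 1024-deep 4 KiB verify workload for 10 seconds. A minimal sketch of that flow, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a target already listening on 10.0.0.2:4420 (paths, socket names and the NQN are taken from the trace above and are illustrative, not authoritative):

  # start bdevperf waiting for RPCs (-z), queue depth 1024, 4 KiB I/O, verify workload, 10 s runtime
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # attach the remote namespace over NVMe/TCP as controller NVMe0 (exposed as bdev NVMe0n1)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the measured run against the attached bdev
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests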
00:15:32.746 00:15:32.746 Latency(us) 00:15:32.746 [2024-12-15T13:29:38.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.746 [2024-12-15T13:29:38.436Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:32.746 Verification LBA range: start 0x0 length 0x4000 00:15:32.746 NVMe0n1 : 10.05 17080.36 66.72 0.00 0.00 59768.42 11260.28 48854.11 00:15:32.746 [2024-12-15T13:29:38.436Z] =================================================================================================================== 00:15:32.746 [2024-12-15T13:29:38.436Z] Total : 17080.36 66.72 0.00 0.00 59768.42 11260.28 48854.11 00:15:32.746 0 00:15:32.746 13:29:38 -- target/queue_depth.sh@39 -- # killprocess 85203 00:15:32.746 13:29:38 -- common/autotest_common.sh@936 -- # '[' -z 85203 ']' 00:15:32.746 13:29:38 -- common/autotest_common.sh@940 -- # kill -0 85203 00:15:32.746 13:29:38 -- common/autotest_common.sh@941 -- # uname 00:15:32.746 13:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.746 13:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85203 00:15:33.005 13:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:33.005 13:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:33.005 killing process with pid 85203 00:15:33.005 13:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85203' 00:15:33.005 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.005 00:15:33.005 Latency(us) 00:15:33.005 [2024-12-15T13:29:38.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.005 [2024-12-15T13:29:38.695Z] =================================================================================================================== 00:15:33.005 [2024-12-15T13:29:38.695Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.005 13:29:38 -- common/autotest_common.sh@955 -- # kill 85203 00:15:33.005 13:29:38 -- common/autotest_common.sh@960 -- # wait 85203 00:15:33.005 13:29:38 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:33.005 13:29:38 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:33.005 13:29:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.005 13:29:38 -- nvmf/common.sh@116 -- # sync 00:15:33.264 13:29:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:33.264 13:29:38 -- nvmf/common.sh@119 -- # set +e 00:15:33.264 13:29:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:33.264 13:29:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:33.264 rmmod nvme_tcp 00:15:33.264 rmmod nvme_fabrics 00:15:33.264 rmmod nvme_keyring 00:15:33.264 13:29:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:33.264 13:29:38 -- nvmf/common.sh@123 -- # set -e 00:15:33.264 13:29:38 -- nvmf/common.sh@124 -- # return 0 00:15:33.264 13:29:38 -- nvmf/common.sh@477 -- # '[' -n 85153 ']' 00:15:33.264 13:29:38 -- nvmf/common.sh@478 -- # killprocess 85153 00:15:33.264 13:29:38 -- common/autotest_common.sh@936 -- # '[' -z 85153 ']' 00:15:33.264 13:29:38 -- common/autotest_common.sh@940 -- # kill -0 85153 00:15:33.264 13:29:38 -- common/autotest_common.sh@941 -- # uname 00:15:33.264 13:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:33.264 13:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85153 00:15:33.264 13:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:33.264 13:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:33.264 killing process with pid 85153 00:15:33.264 13:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85153' 00:15:33.264 13:29:38 -- common/autotest_common.sh@955 -- # kill 85153 00:15:33.264 13:29:38 -- common/autotest_common.sh@960 -- # wait 85153 00:15:33.523 13:29:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:33.523 13:29:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:33.523 13:29:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:33.523 13:29:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:33.523 13:29:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:33.523 13:29:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.523 13:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.523 13:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.523 13:29:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:33.523 ************************************ 00:15:33.523 END TEST nvmf_queue_depth 00:15:33.523 ************************************ 00:15:33.523 00:15:33.523 real 0m13.588s 00:15:33.523 user 0m23.082s 00:15:33.523 sys 0m2.213s 00:15:33.523 13:29:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.523 13:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 13:29:39 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:33.523 13:29:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:33.523 13:29:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:33.523 13:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:33.523 ************************************ 00:15:33.523 START TEST nvmf_multipath 00:15:33.523 ************************************ 00:15:33.523 13:29:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:33.523 * Looking for test storage... 00:15:33.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:33.523 13:29:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:33.523 13:29:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:33.523 13:29:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:33.783 13:29:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:33.783 13:29:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:33.783 13:29:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:33.783 13:29:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:33.783 13:29:39 -- scripts/common.sh@335 -- # IFS=.-: 00:15:33.783 13:29:39 -- scripts/common.sh@335 -- # read -ra ver1 00:15:33.783 13:29:39 -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.783 13:29:39 -- scripts/common.sh@336 -- # read -ra ver2 00:15:33.783 13:29:39 -- scripts/common.sh@337 -- # local 'op=<' 00:15:33.783 13:29:39 -- scripts/common.sh@339 -- # ver1_l=2 00:15:33.783 13:29:39 -- scripts/common.sh@340 -- # ver2_l=1 00:15:33.783 13:29:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:33.783 13:29:39 -- scripts/common.sh@343 -- # case "$op" in 00:15:33.783 13:29:39 -- scripts/common.sh@344 -- # : 1 00:15:33.783 13:29:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:33.783 13:29:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.783 13:29:39 -- scripts/common.sh@364 -- # decimal 1 00:15:33.783 13:29:39 -- scripts/common.sh@352 -- # local d=1 00:15:33.783 13:29:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.783 13:29:39 -- scripts/common.sh@354 -- # echo 1 00:15:33.783 13:29:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:33.783 13:29:39 -- scripts/common.sh@365 -- # decimal 2 00:15:33.783 13:29:39 -- scripts/common.sh@352 -- # local d=2 00:15:33.783 13:29:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.783 13:29:39 -- scripts/common.sh@354 -- # echo 2 00:15:33.783 13:29:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:33.783 13:29:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:33.783 13:29:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:33.783 13:29:39 -- scripts/common.sh@367 -- # return 0 00:15:33.783 13:29:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.783 13:29:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.783 --rc genhtml_branch_coverage=1 00:15:33.783 --rc genhtml_function_coverage=1 00:15:33.783 --rc genhtml_legend=1 00:15:33.783 --rc geninfo_all_blocks=1 00:15:33.783 --rc geninfo_unexecuted_blocks=1 00:15:33.783 00:15:33.783 ' 00:15:33.783 13:29:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.783 --rc genhtml_branch_coverage=1 00:15:33.783 --rc genhtml_function_coverage=1 00:15:33.783 --rc genhtml_legend=1 00:15:33.783 --rc geninfo_all_blocks=1 00:15:33.783 --rc geninfo_unexecuted_blocks=1 00:15:33.783 00:15:33.783 ' 00:15:33.783 13:29:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.783 --rc genhtml_branch_coverage=1 00:15:33.783 --rc genhtml_function_coverage=1 00:15:33.783 --rc genhtml_legend=1 00:15:33.783 --rc geninfo_all_blocks=1 00:15:33.783 --rc geninfo_unexecuted_blocks=1 00:15:33.783 00:15:33.783 ' 00:15:33.783 13:29:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:33.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.783 --rc genhtml_branch_coverage=1 00:15:33.783 --rc genhtml_function_coverage=1 00:15:33.783 --rc genhtml_legend=1 00:15:33.783 --rc geninfo_all_blocks=1 00:15:33.783 --rc geninfo_unexecuted_blocks=1 00:15:33.783 00:15:33.783 ' 00:15:33.783 13:29:39 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.783 13:29:39 -- nvmf/common.sh@7 -- # uname -s 00:15:33.783 13:29:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.783 13:29:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.783 13:29:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.783 13:29:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.783 13:29:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.783 13:29:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.783 13:29:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.783 13:29:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.783 13:29:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.783 13:29:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.783 13:29:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:33.783 
13:29:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:33.783 13:29:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.783 13:29:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.783 13:29:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.783 13:29:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.783 13:29:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.783 13:29:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.783 13:29:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.783 13:29:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.783 13:29:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.783 13:29:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.783 13:29:39 -- paths/export.sh@5 -- # export PATH 00:15:33.783 13:29:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.783 13:29:39 -- nvmf/common.sh@46 -- # : 0 00:15:33.783 13:29:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.783 13:29:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.783 13:29:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.783 13:29:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.783 13:29:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.783 13:29:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:33.783 13:29:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.783 13:29:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.783 13:29:39 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.783 13:29:39 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.783 13:29:39 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:33.783 13:29:39 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.783 13:29:39 -- target/multipath.sh@43 -- # nvmftestinit 00:15:33.783 13:29:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.783 13:29:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.783 13:29:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.783 13:29:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.783 13:29:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.783 13:29:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.783 13:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.783 13:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.783 13:29:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:33.783 13:29:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:33.783 13:29:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:33.784 13:29:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:33.784 13:29:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:33.784 13:29:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:33.784 13:29:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.784 13:29:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.784 13:29:39 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:33.784 13:29:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:33.784 13:29:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:33.784 13:29:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:33.784 13:29:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:33.784 13:29:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.784 13:29:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:33.784 13:29:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:33.784 13:29:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:33.784 13:29:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:33.784 13:29:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:33.784 13:29:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:33.784 Cannot find device "nvmf_tgt_br" 00:15:33.784 13:29:39 -- nvmf/common.sh@154 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.784 Cannot find device "nvmf_tgt_br2" 00:15:33.784 13:29:39 -- nvmf/common.sh@155 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:33.784 13:29:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:33.784 Cannot find device "nvmf_tgt_br" 00:15:33.784 13:29:39 -- nvmf/common.sh@157 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:33.784 Cannot find device "nvmf_tgt_br2" 00:15:33.784 13:29:39 -- nvmf/common.sh@158 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:33.784 13:29:39 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:33.784 13:29:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.784 13:29:39 -- nvmf/common.sh@161 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:33.784 13:29:39 -- nvmf/common.sh@162 -- # true 00:15:33.784 13:29:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:33.784 13:29:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:33.784 13:29:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:33.784 13:29:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:33.784 13:29:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:33.784 13:29:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.043 13:29:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.043 13:29:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:34.043 13:29:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:34.043 13:29:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:34.043 13:29:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:34.043 13:29:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:34.043 13:29:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:34.043 13:29:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.043 13:29:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.043 13:29:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.043 13:29:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:34.043 13:29:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:34.043 13:29:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.043 13:29:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.043 13:29:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.043 13:29:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.043 13:29:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.043 13:29:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:34.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:34.043 00:15:34.043 --- 10.0.0.2 ping statistics --- 00:15:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.043 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:34.043 13:29:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:34.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:34.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:34.043 00:15:34.043 --- 10.0.0.3 ping statistics --- 00:15:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.043 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:34.043 13:29:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:34.043 00:15:34.043 --- 10.0.0.1 ping statistics --- 00:15:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.043 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:34.043 13:29:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.043 13:29:39 -- nvmf/common.sh@421 -- # return 0 00:15:34.043 13:29:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:34.043 13:29:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.043 13:29:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:34.043 13:29:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:34.043 13:29:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.043 13:29:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:34.043 13:29:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:34.043 13:29:39 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:34.043 13:29:39 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:34.043 13:29:39 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:34.043 13:29:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:34.043 13:29:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.043 13:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:34.043 13:29:39 -- nvmf/common.sh@469 -- # nvmfpid=85544 00:15:34.043 13:29:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.043 13:29:39 -- nvmf/common.sh@470 -- # waitforlisten 85544 00:15:34.043 13:29:39 -- common/autotest_common.sh@829 -- # '[' -z 85544 ']' 00:15:34.043 13:29:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.043 13:29:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.043 13:29:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.043 13:29:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.043 13:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:34.043 [2024-12-15 13:29:39.681706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:34.043 [2024-12-15 13:29:39.681821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.302 [2024-12-15 13:29:39.819441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.302 [2024-12-15 13:29:39.883038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:34.302 [2024-12-15 13:29:39.883195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:34.302 [2024-12-15 13:29:39.883207] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.302 [2024-12-15 13:29:39.883215] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.302 [2024-12-15 13:29:39.883844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.302 [2024-12-15 13:29:39.883957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.302 [2024-12-15 13:29:39.884102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.302 [2024-12-15 13:29:39.884105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.237 13:29:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.237 13:29:40 -- common/autotest_common.sh@862 -- # return 0 00:15:35.237 13:29:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:35.237 13:29:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.237 13:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:35.237 13:29:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.237 13:29:40 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:35.237 [2024-12-15 13:29:40.916216] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.495 13:29:40 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:35.754 Malloc0 00:15:35.754 13:29:41 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:36.013 13:29:41 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.272 13:29:41 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.530 [2024-12-15 13:29:42.030165] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.530 13:29:42 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:36.801 [2024-12-15 13:29:42.250323] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.801 13:29:42 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:37.108 13:29:42 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:37.108 13:29:42 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:37.108 13:29:42 -- common/autotest_common.sh@1187 -- # local i=0 00:15:37.108 13:29:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.108 13:29:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:37.108 13:29:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:39.652 13:29:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
00:15:39.652 13:29:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:39.652 13:29:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.652 13:29:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:39.652 13:29:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.652 13:29:44 -- common/autotest_common.sh@1197 -- # return 0 00:15:39.652 13:29:44 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:39.653 13:29:44 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:39.653 13:29:44 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:39.653 13:29:44 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:39.653 13:29:44 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:39.653 13:29:44 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:39.653 13:29:44 -- target/multipath.sh@38 -- # return 0 00:15:39.653 13:29:44 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:39.653 13:29:44 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:39.653 13:29:44 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:39.653 13:29:44 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:39.653 13:29:44 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:39.653 13:29:44 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:39.653 13:29:44 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:39.653 13:29:44 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:39.653 13:29:44 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.653 13:29:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:39.653 13:29:44 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:39.653 13:29:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:39.653 13:29:44 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:39.653 13:29:44 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:39.653 13:29:44 -- target/multipath.sh@22 -- # local timeout=20 00:15:39.653 13:29:44 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:39.653 13:29:44 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:39.653 13:29:44 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:39.653 13:29:44 -- target/multipath.sh@85 -- # echo numa 00:15:39.653 13:29:44 -- target/multipath.sh@88 -- # fio_pid=85687 00:15:39.653 13:29:44 -- target/multipath.sh@90 -- # sleep 1 00:15:39.653 13:29:44 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:39.653 [global] 00:15:39.653 thread=1 00:15:39.653 invalidate=1 00:15:39.653 rw=randrw 00:15:39.653 time_based=1 00:15:39.653 runtime=6 00:15:39.653 ioengine=libaio 00:15:39.653 direct=1 00:15:39.653 bs=4096 00:15:39.653 iodepth=128 00:15:39.653 norandommap=0 00:15:39.653 numjobs=1 00:15:39.653 00:15:39.653 verify_dump=1 00:15:39.653 verify_backlog=512 00:15:39.653 verify_state_save=0 00:15:39.653 do_verify=1 00:15:39.653 verify=crc32c-intel 00:15:39.653 [job0] 00:15:39.653 filename=/dev/nvme0n1 00:15:39.653 Could not set queue depth (nvme0n1) 00:15:39.653 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:39.653 fio-3.35 00:15:39.653 Starting 1 thread 00:15:40.218 13:29:45 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:40.477 13:29:46 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:40.735 13:29:46 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:40.735 13:29:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:40.735 13:29:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.735 13:29:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:40.735 13:29:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:40.735 13:29:46 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:40.735 13:29:46 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:40.735 13:29:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:40.736 13:29:46 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.736 13:29:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:40.736 13:29:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.736 13:29:46 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:40.736 13:29:46 -- target/multipath.sh@25 -- # sleep 1s 00:15:41.671 13:29:47 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:41.671 13:29:47 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:41.671 13:29:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:41.671 13:29:47 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:41.930 13:29:47 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:42.497 13:29:47 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:42.497 13:29:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:42.497 13:29:47 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.497 13:29:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:42.497 13:29:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:42.497 13:29:47 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:42.497 13:29:47 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:42.497 13:29:47 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:42.497 13:29:47 -- target/multipath.sh@22 -- # local timeout=20 00:15:42.497 13:29:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:42.497 13:29:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:42.497 13:29:47 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:42.497 13:29:47 -- target/multipath.sh@25 -- # sleep 1s 00:15:43.430 13:29:48 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:43.430 13:29:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:43.430 13:29:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:43.430 13:29:48 -- target/multipath.sh@104 -- # wait 85687 00:15:45.963 00:15:45.963 job0: (groupid=0, jobs=1): err= 0: pid=85708: Sun Dec 15 13:29:51 2024 00:15:45.963 read: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(291MiB/6001msec) 00:15:45.963 slat (usec): min=2, max=13881, avg=45.07, stdev=209.13 00:15:45.963 clat (usec): min=621, max=52853, avg=6991.83, stdev=1778.76 00:15:45.963 lat (usec): min=652, max=52863, avg=7036.90, stdev=1782.77 00:15:45.963 clat percentiles (usec): 00:15:45.963 | 1.00th=[ 4047], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6194], 00:15:45.963 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7111], 00:15:45.963 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8160], 95.00th=[ 8717], 00:15:45.963 | 99.00th=[10683], 99.50th=[11469], 99.90th=[17171], 99.95th=[50070], 00:15:45.963 | 99.99th=[52691] 00:15:45.963 bw ( KiB/s): min=12632, max=35224, per=52.69%, avg=26125.82, stdev=7313.28, samples=11 00:15:45.963 iops : min= 3158, max= 8806, avg=6531.45, stdev=1828.32, samples=11 00:15:45.963 write: IOPS=7597, BW=29.7MiB/s (31.1MB/s)(154MiB/5190msec); 0 zone resets 00:15:45.963 slat (usec): min=4, max=6841, avg=56.73, stdev=147.99 00:15:45.963 clat (usec): min=513, max=52602, avg=6105.21, stdev=1889.71 00:15:45.963 lat (usec): min=602, max=52626, avg=6161.94, stdev=1891.39 00:15:45.963 clat percentiles (usec): 00:15:45.963 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5080], 20.00th=[ 5473], 00:15:45.963 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6259], 00:15:45.963 | 70.00th=[ 6456], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7242], 00:15:45.963 | 99.00th=[ 9503], 99.50th=[10814], 99.90th=[47973], 99.95th=[50070], 00:15:45.963 | 99.99th=[52691] 00:15:45.963 bw ( KiB/s): min=12904, max=34472, per=86.03%, avg=26143.27, stdev=6899.24, samples=11 00:15:45.963 iops : min= 3226, max= 8618, avg=6535.82, stdev=1724.81, samples=11 00:15:45.963 lat (usec) : 750=0.01%, 1000=0.01% 00:15:45.963 lat (msec) : 2=0.09%, 4=1.76%, 10=96.67%, 20=1.36%, 50=0.06% 00:15:45.963 lat (msec) : 100=0.05% 00:15:45.963 cpu : usr=5.81%, sys=24.01%, ctx=7016, majf=0, minf=145 00:15:45.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:45.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.963 issued rwts: total=74392,39429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.963 00:15:45.963 Run status group 0 (all jobs): 00:15:45.963 READ: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6001-6001msec 00:15:45.963 WRITE: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=154MiB (162MB), run=5190-5190msec 00:15:45.963 00:15:45.963 Disk stats (read/write): 00:15:45.963 nvme0n1: ios=73037/38991, merge=0/0, ticks=471122/217725, in_queue=688847, util=98.57% 00:15:45.963 13:29:51 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:45.963 13:29:51 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:45.963 
13:29:51 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:45.963 13:29:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:45.963 13:29:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.963 13:29:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:45.963 13:29:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:45.963 13:29:51 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:45.963 13:29:51 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:45.963 13:29:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:45.963 13:29:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:45.963 13:29:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:45.963 13:29:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:45.963 13:29:51 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:45.963 13:29:51 -- target/multipath.sh@25 -- # sleep 1s 00:15:46.899 13:29:52 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:46.899 13:29:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.899 13:29:52 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:46.899 13:29:52 -- target/multipath.sh@113 -- # echo round-robin 00:15:46.899 13:29:52 -- target/multipath.sh@116 -- # fio_pid=85838 00:15:46.899 13:29:52 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:46.899 13:29:52 -- target/multipath.sh@118 -- # sleep 1 00:15:47.158 [global] 00:15:47.158 thread=1 00:15:47.158 invalidate=1 00:15:47.158 rw=randrw 00:15:47.158 time_based=1 00:15:47.158 runtime=6 00:15:47.158 ioengine=libaio 00:15:47.158 direct=1 00:15:47.158 bs=4096 00:15:47.158 iodepth=128 00:15:47.158 norandommap=0 00:15:47.158 numjobs=1 00:15:47.158 00:15:47.158 verify_dump=1 00:15:47.158 verify_backlog=512 00:15:47.158 verify_state_save=0 00:15:47.158 do_verify=1 00:15:47.158 verify=crc32c-intel 00:15:47.158 [job0] 00:15:47.158 filename=/dev/nvme0n1 00:15:47.158 Could not set queue depth (nvme0n1) 00:15:47.158 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:47.158 fio-3.35 00:15:47.158 Starting 1 thread 00:15:48.093 13:29:53 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:48.351 13:29:53 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:48.609 13:29:54 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:48.609 13:29:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:48.609 13:29:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.609 13:29:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:48.609 13:29:54 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:48.609 13:29:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:48.609 13:29:54 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:48.609 13:29:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:48.609 13:29:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.609 13:29:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:48.609 13:29:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:48.609 13:29:54 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:48.609 13:29:54 -- target/multipath.sh@25 -- # sleep 1s 00:15:49.543 13:29:55 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:49.543 13:29:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:49.543 13:29:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:49.543 13:29:55 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:49.801 13:29:55 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:50.059 13:29:55 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:50.059 13:29:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:50.059 13:29:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:50.059 13:29:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:50.059 13:29:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:50.059 13:29:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:50.059 13:29:55 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:50.059 13:29:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:50.059 13:29:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:50.059 13:29:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:50.059 13:29:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:50.059 13:29:55 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:50.059 13:29:55 -- target/multipath.sh@25 -- # sleep 1s 00:15:50.993 13:29:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:50.993 13:29:56 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:50.993 13:29:56 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:50.993 13:29:56 -- target/multipath.sh@132 -- # wait 85838 00:15:53.526 00:15:53.526 job0: (groupid=0, jobs=1): err= 0: pid=85865: Sun Dec 15 13:29:58 2024 00:15:53.526 read: IOPS=13.6k, BW=53.0MiB/s (55.5MB/s)(318MiB/6005msec) 00:15:53.526 slat (usec): min=2, max=5475, avg=37.78, stdev=180.21 00:15:53.526 clat (usec): min=592, max=12611, avg=6530.39, stdev=1321.51 00:15:53.526 lat (usec): min=651, max=12620, avg=6568.18, stdev=1335.45 00:15:53.526 clat percentiles (usec): 00:15:53.526 | 1.00th=[ 3359], 5.00th=[ 4178], 10.00th=[ 4686], 20.00th=[ 5538], 00:15:53.526 | 30.00th=[ 6063], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6849], 00:15:53.526 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8029], 95.00th=[ 8455], 00:15:53.526 | 99.00th=[10159], 99.50th=[10421], 99.90th=[11469], 99.95th=[11863], 00:15:53.526 | 99.99th=[12387] 00:15:53.526 bw ( KiB/s): min=11528, max=41704, per=52.53%, avg=28489.45, stdev=9620.41, samples=11 00:15:53.526 iops : min= 2882, max=10426, avg=7122.36, stdev=2405.10, samples=11 00:15:53.526 write: IOPS=8321, BW=32.5MiB/s (34.1MB/s)(163MiB/5014msec); 0 zone resets 00:15:53.526 slat (usec): min=15, max=2346, avg=49.97, stdev=121.39 00:15:53.526 clat (usec): min=340, max=11372, avg=5514.83, stdev=1343.20 00:15:53.526 lat (usec): min=393, max=11407, avg=5564.80, stdev=1353.83 00:15:53.526 clat percentiles (usec): 00:15:53.526 | 1.00th=[ 2442], 5.00th=[ 3097], 10.00th=[ 3458], 20.00th=[ 4113], 00:15:53.526 | 30.00th=[ 4948], 40.00th=[ 5604], 50.00th=[ 5932], 60.00th=[ 6128], 00:15:53.526 | 70.00th=[ 6390], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 7111], 00:15:53.526 | 99.00th=[ 8029], 99.50th=[ 8979], 99.90th=[10421], 99.95th=[10683], 00:15:53.526 | 99.99th=[11207] 00:15:53.526 bw ( KiB/s): min=11808, max=40960, per=85.57%, avg=28482.18, stdev=9421.71, samples=11 00:15:53.526 iops : min= 2952, max=10240, avg=7120.55, stdev=2355.43, samples=11 00:15:53.526 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:53.526 lat (msec) : 2=0.13%, 4=8.59%, 10=90.46%, 20=0.80% 00:15:53.526 cpu : usr=6.55%, sys=25.78%, ctx=7963, majf=0, minf=163 00:15:53.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:53.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:53.526 issued rwts: total=81411,41723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:53.526 00:15:53.526 Run status group 0 (all jobs): 00:15:53.526 READ: bw=53.0MiB/s (55.5MB/s), 53.0MiB/s-53.0MiB/s (55.5MB/s-55.5MB/s), io=318MiB (333MB), run=6005-6005msec 00:15:53.526 WRITE: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=163MiB (171MB), run=5014-5014msec 00:15:53.526 00:15:53.526 Disk stats (read/write): 00:15:53.526 nvme0n1: ios=80446/41014, merge=0/0, ticks=482936/205352, in_queue=688288, util=98.56% 00:15:53.526 13:29:58 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:53.526 13:29:58 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:53.526 13:29:58 -- common/autotest_common.sh@1208 -- # local i=0 00:15:53.526 13:29:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.526 
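As a quick sanity check on the two fio summaries above: with the 4 KiB block size from the job file, roughly 12.4k read IOPS works out to about 50.8 MB/s (48.4 MiB/s) and 13.6k read IOPS to about 55.5 MB/s (53.0 MiB/s), matching the reported READ bandwidths; the write sides (7,597 and 8,321 IOPS, i.e. about 31.1 MB/s and 34.1 MB/s) line up the same way.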
13:29:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:53.526 13:29:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:53.526 13:29:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.526 13:29:58 -- common/autotest_common.sh@1220 -- # return 0 00:15:53.526 13:29:58 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.785 13:29:59 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:53.785 13:29:59 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:53.785 13:29:59 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:53.785 13:29:59 -- target/multipath.sh@144 -- # nvmftestfini 00:15:53.785 13:29:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.785 13:29:59 -- nvmf/common.sh@116 -- # sync 00:15:53.785 13:29:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.785 13:29:59 -- nvmf/common.sh@119 -- # set +e 00:15:53.785 13:29:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.785 13:29:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.785 rmmod nvme_tcp 00:15:53.785 rmmod nvme_fabrics 00:15:53.785 rmmod nvme_keyring 00:15:53.785 13:29:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:53.785 13:29:59 -- nvmf/common.sh@123 -- # set -e 00:15:53.785 13:29:59 -- nvmf/common.sh@124 -- # return 0 00:15:53.785 13:29:59 -- nvmf/common.sh@477 -- # '[' -n 85544 ']' 00:15:53.785 13:29:59 -- nvmf/common.sh@478 -- # killprocess 85544 00:15:53.785 13:29:59 -- common/autotest_common.sh@936 -- # '[' -z 85544 ']' 00:15:53.785 13:29:59 -- common/autotest_common.sh@940 -- # kill -0 85544 00:15:53.785 13:29:59 -- common/autotest_common.sh@941 -- # uname 00:15:53.785 13:29:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.785 13:29:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85544 00:15:53.785 killing process with pid 85544 00:15:53.785 13:29:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.785 13:29:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.785 13:29:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85544' 00:15:53.785 13:29:59 -- common/autotest_common.sh@955 -- # kill 85544 00:15:53.785 13:29:59 -- common/autotest_common.sh@960 -- # wait 85544 00:15:54.045 13:29:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:54.045 13:29:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:54.045 13:29:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:54.045 13:29:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.045 13:29:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:54.045 13:29:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.045 13:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.045 13:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.045 13:29:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:54.045 ************************************ 00:15:54.045 END TEST nvmf_multipath 00:15:54.045 ************************************ 00:15:54.045 00:15:54.045 real 0m20.541s 00:15:54.045 user 1m20.166s 00:15:54.045 sys 0m7.073s 00:15:54.045 13:29:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:54.045 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:15:54.045 13:29:59 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:54.045 13:29:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:54.045 13:29:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.045 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:15:54.045 ************************************ 00:15:54.045 START TEST nvmf_zcopy 00:15:54.045 ************************************ 00:15:54.045 13:29:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:54.304 * Looking for test storage... 00:15:54.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.304 13:29:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:54.304 13:29:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:54.304 13:29:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:54.304 13:29:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:54.304 13:29:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:54.304 13:29:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:54.304 13:29:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:54.304 13:29:59 -- scripts/common.sh@335 -- # IFS=.-: 00:15:54.304 13:29:59 -- scripts/common.sh@335 -- # read -ra ver1 00:15:54.304 13:29:59 -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.304 13:29:59 -- scripts/common.sh@336 -- # read -ra ver2 00:15:54.304 13:29:59 -- scripts/common.sh@337 -- # local 'op=<' 00:15:54.304 13:29:59 -- scripts/common.sh@339 -- # ver1_l=2 00:15:54.304 13:29:59 -- scripts/common.sh@340 -- # ver2_l=1 00:15:54.304 13:29:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:54.304 13:29:59 -- scripts/common.sh@343 -- # case "$op" in 00:15:54.304 13:29:59 -- scripts/common.sh@344 -- # : 1 00:15:54.304 13:29:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:54.304 13:29:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.304 13:29:59 -- scripts/common.sh@364 -- # decimal 1 00:15:54.304 13:29:59 -- scripts/common.sh@352 -- # local d=1 00:15:54.304 13:29:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.304 13:29:59 -- scripts/common.sh@354 -- # echo 1 00:15:54.304 13:29:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:54.304 13:29:59 -- scripts/common.sh@365 -- # decimal 2 00:15:54.304 13:29:59 -- scripts/common.sh@352 -- # local d=2 00:15:54.304 13:29:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.304 13:29:59 -- scripts/common.sh@354 -- # echo 2 00:15:54.304 13:29:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:54.304 13:29:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:54.304 13:29:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:54.304 13:29:59 -- scripts/common.sh@367 -- # return 0 00:15:54.304 13:29:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.304 13:29:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.304 --rc genhtml_branch_coverage=1 00:15:54.304 --rc genhtml_function_coverage=1 00:15:54.304 --rc genhtml_legend=1 00:15:54.304 --rc geninfo_all_blocks=1 00:15:54.304 --rc geninfo_unexecuted_blocks=1 00:15:54.304 00:15:54.304 ' 00:15:54.304 13:29:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.304 --rc genhtml_branch_coverage=1 00:15:54.304 --rc genhtml_function_coverage=1 00:15:54.304 --rc genhtml_legend=1 00:15:54.304 --rc geninfo_all_blocks=1 00:15:54.305 --rc geninfo_unexecuted_blocks=1 00:15:54.305 00:15:54.305 ' 00:15:54.305 13:29:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.305 --rc genhtml_branch_coverage=1 00:15:54.305 --rc genhtml_function_coverage=1 00:15:54.305 --rc genhtml_legend=1 00:15:54.305 --rc geninfo_all_blocks=1 00:15:54.305 --rc geninfo_unexecuted_blocks=1 00:15:54.305 00:15:54.305 ' 00:15:54.305 13:29:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.305 --rc genhtml_branch_coverage=1 00:15:54.305 --rc genhtml_function_coverage=1 00:15:54.305 --rc genhtml_legend=1 00:15:54.305 --rc geninfo_all_blocks=1 00:15:54.305 --rc geninfo_unexecuted_blocks=1 00:15:54.305 00:15:54.305 ' 00:15:54.305 13:29:59 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.305 13:29:59 -- nvmf/common.sh@7 -- # uname -s 00:15:54.305 13:29:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.305 13:29:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.305 13:29:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.305 13:29:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.305 13:29:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.305 13:29:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.305 13:29:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.305 13:29:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.305 13:29:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.305 13:29:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:54.305 
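The scripts/common.sh@332-@367 trace above is just a dotted-version comparison used to pick lcov options (here evaluating "lt 1.15 2"). A simplified sketch of what those xtrace lines do, covering only the "<" case exercised here and not copied from the repository:

```bash
# Simplified version-compare sketch inferred from the scripts/common.sh
# xtrace above: split both versions on ".-:" and compare field by field,
# treating missing fields as 0. Returns success if $1 is older than $2.
lt() {
	local ver1 ver2 v
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$2"
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
			return 1 # first version is newer
		elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
			return 0 # first version is older
		fi
	done
	return 1 # equal, so not "less than"
}
```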
13:29:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:15:54.305 13:29:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.305 13:29:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.305 13:29:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.305 13:29:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.305 13:29:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.305 13:29:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.305 13:29:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.305 13:29:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.305 13:29:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.305 13:29:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.305 13:29:59 -- paths/export.sh@5 -- # export PATH 00:15:54.305 13:29:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.305 13:29:59 -- nvmf/common.sh@46 -- # : 0 00:15:54.305 13:29:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.305 13:29:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.305 13:29:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.305 13:29:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.305 13:29:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.305 13:29:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:54.305 13:29:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.305 13:29:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.305 13:29:59 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:54.305 13:29:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:54.305 13:29:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.305 13:29:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:54.305 13:29:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:54.305 13:29:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:54.305 13:29:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.305 13:29:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.305 13:29:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.305 13:29:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:54.305 13:29:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:54.305 13:29:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.305 13:29:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.305 13:29:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:54.305 13:29:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:54.305 13:29:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.305 13:29:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.305 13:29:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.305 13:29:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.305 13:29:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.305 13:29:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.305 13:29:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.305 13:29:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.305 13:29:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:54.305 13:29:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:54.305 Cannot find device "nvmf_tgt_br" 00:15:54.305 13:29:59 -- nvmf/common.sh@154 -- # true 00:15:54.305 13:29:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.305 Cannot find device "nvmf_tgt_br2" 00:15:54.305 13:29:59 -- nvmf/common.sh@155 -- # true 00:15:54.305 13:29:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:54.305 13:29:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:54.305 Cannot find device "nvmf_tgt_br" 00:15:54.305 13:29:59 -- nvmf/common.sh@157 -- # true 00:15:54.305 13:29:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:54.305 Cannot find device "nvmf_tgt_br2" 00:15:54.305 13:29:59 -- nvmf/common.sh@158 -- # true 00:15:54.305 13:29:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:54.564 13:30:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:54.564 13:30:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.564 13:30:00 -- nvmf/common.sh@161 -- # true 00:15:54.564 13:30:00 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.564 13:30:00 -- nvmf/common.sh@162 -- # true 00:15:54.564 13:30:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.564 13:30:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.564 13:30:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.564 13:30:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.564 13:30:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.564 13:30:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.564 13:30:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.564 13:30:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.564 13:30:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.564 13:30:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:54.564 13:30:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:54.564 13:30:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:54.564 13:30:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:54.564 13:30:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.564 13:30:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.564 13:30:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.564 13:30:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:54.564 13:30:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:54.564 13:30:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.564 13:30:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.564 13:30:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.564 13:30:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.564 13:30:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.564 13:30:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:54.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:54.564 00:15:54.564 --- 10.0.0.2 ping statistics --- 00:15:54.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.564 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:54.564 13:30:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:54.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:15:54.564 00:15:54.564 --- 10.0.0.3 ping statistics --- 00:15:54.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.564 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:54.564 13:30:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:54.564 00:15:54.565 --- 10.0.0.1 ping statistics --- 00:15:54.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.565 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:54.565 13:30:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.565 13:30:00 -- nvmf/common.sh@421 -- # return 0 00:15:54.565 13:30:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:54.565 13:30:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.565 13:30:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:54.565 13:30:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:54.565 13:30:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.565 13:30:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:54.565 13:30:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:54.565 13:30:00 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:54.565 13:30:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:54.565 13:30:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.565 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:15:54.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.565 13:30:00 -- nvmf/common.sh@469 -- # nvmfpid=86146 00:15:54.565 13:30:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.565 13:30:00 -- nvmf/common.sh@470 -- # waitforlisten 86146 00:15:54.565 13:30:00 -- common/autotest_common.sh@829 -- # '[' -z 86146 ']' 00:15:54.565 13:30:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.565 13:30:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.565 13:30:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.565 13:30:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.565 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:15:54.823 [2024-12-15 13:30:00.281053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:54.823 [2024-12-15 13:30:00.281140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.823 [2024-12-15 13:30:00.422716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.823 [2024-12-15 13:30:00.480192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.823 [2024-12-15 13:30:00.480339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.824 [2024-12-15 13:30:00.480352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.824 [2024-12-15 13:30:00.480360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
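For reference, the namespace/veth/bridge topology that the nvmf_veth_init trace above just assembled condenses to the commands below (copied from the log; the link-up ordering, iptables ACCEPT rules and the verification pings are omitted):

```bash
# Test topology built by nvmf_veth_init, as traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target listener 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target listener 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# The target itself then runs inside the namespace (nvmf/common.sh@468):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
```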
00:15:54.824 [2024-12-15 13:30:00.480386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.759 13:30:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.759 13:30:01 -- common/autotest_common.sh@862 -- # return 0 00:15:55.759 13:30:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:55.759 13:30:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 13:30:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.759 13:30:01 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:55.759 13:30:01 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 [2024-12-15 13:30:01.371801] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 [2024-12-15 13:30:01.387921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 malloc0 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:55.759 13:30:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.759 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:55.759 13:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.759 13:30:01 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:55.759 13:30:01 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:55.759 13:30:01 -- nvmf/common.sh@520 -- # config=() 00:15:55.759 13:30:01 -- nvmf/common.sh@520 -- # local subsystem config 00:15:55.759 13:30:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:55.759 13:30:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:55.759 { 00:15:55.759 "params": { 00:15:55.759 "name": "Nvme$subsystem", 00:15:55.759 "trtype": "$TEST_TRANSPORT", 
00:15:55.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:55.759 "adrfam": "ipv4", 00:15:55.759 "trsvcid": "$NVMF_PORT", 00:15:55.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:55.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:55.759 "hdgst": ${hdgst:-false}, 00:15:55.759 "ddgst": ${ddgst:-false} 00:15:55.759 }, 00:15:55.759 "method": "bdev_nvme_attach_controller" 00:15:55.759 } 00:15:55.759 EOF 00:15:55.759 )") 00:15:55.759 13:30:01 -- nvmf/common.sh@542 -- # cat 00:15:55.759 13:30:01 -- nvmf/common.sh@544 -- # jq . 00:15:55.759 13:30:01 -- nvmf/common.sh@545 -- # IFS=, 00:15:55.759 13:30:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:55.759 "params": { 00:15:55.759 "name": "Nvme1", 00:15:55.759 "trtype": "tcp", 00:15:55.759 "traddr": "10.0.0.2", 00:15:55.759 "adrfam": "ipv4", 00:15:55.759 "trsvcid": "4420", 00:15:55.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:55.759 "hdgst": false, 00:15:55.759 "ddgst": false 00:15:55.759 }, 00:15:55.759 "method": "bdev_nvme_attach_controller" 00:15:55.759 }' 00:15:56.017 [2024-12-15 13:30:01.477822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:56.017 [2024-12-15 13:30:01.478100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:15:56.017 [2024-12-15 13:30:01.620747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.276 [2024-12-15 13:30:01.719531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.276 Running I/O for 10 seconds... 00:16:06.253 00:16:06.253 Latency(us) 00:16:06.253 [2024-12-15T13:30:11.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.253 [2024-12-15T13:30:11.943Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:06.253 Verification LBA range: start 0x0 length 0x1000 00:16:06.253 Nvme1n1 : 10.01 10904.81 85.19 0.00 0.00 11707.38 1243.69 20971.52 00:16:06.253 [2024-12-15T13:30:11.944Z] =================================================================================================================== 00:16:06.254 [2024-12-15T13:30:11.944Z] Total : 10904.81 85.19 0.00 0.00 11707.38 1243.69 20971.52 00:16:06.512 13:30:12 -- target/zcopy.sh@39 -- # perfpid=86320 00:16:06.512 13:30:12 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:06.512 13:30:12 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:06.512 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:06.512 13:30:12 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:06.512 13:30:12 -- nvmf/common.sh@520 -- # config=() 00:16:06.512 13:30:12 -- nvmf/common.sh@520 -- # local subsystem config 00:16:06.512 13:30:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:06.512 13:30:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:06.512 { 00:16:06.512 "params": { 00:16:06.512 "name": "Nvme$subsystem", 00:16:06.512 "trtype": "$TEST_TRANSPORT", 00:16:06.512 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:06.512 "adrfam": "ipv4", 00:16:06.512 "trsvcid": "$NVMF_PORT", 00:16:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:06.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:06.512 "hdgst": ${hdgst:-false}, 00:16:06.512 "ddgst": ${ddgst:-false} 
00:16:06.512 }, 00:16:06.512 "method": "bdev_nvme_attach_controller" 00:16:06.512 } 00:16:06.512 EOF 00:16:06.512 )") 00:16:06.512 13:30:12 -- nvmf/common.sh@542 -- # cat 00:16:06.512 [2024-12-15 13:30:12.111065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.512 [2024-12-15 13:30:12.111108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.512 13:30:12 -- nvmf/common.sh@544 -- # jq . 00:16:06.512 13:30:12 -- nvmf/common.sh@545 -- # IFS=, 00:16:06.512 13:30:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:06.512 "params": { 00:16:06.512 "name": "Nvme1", 00:16:06.512 "trtype": "tcp", 00:16:06.512 "traddr": "10.0.0.2", 00:16:06.512 "adrfam": "ipv4", 00:16:06.512 "trsvcid": "4420", 00:16:06.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:06.512 "hdgst": false, 00:16:06.512 "ddgst": false 00:16:06.512 }, 00:16:06.512 "method": "bdev_nvme_attach_controller" 00:16:06.512 }' 00:16:06.512 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.512 [2024-12-15 13:30:12.123010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.512 [2024-12-15 13:30:12.123036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.512 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.512 [2024-12-15 13:30:12.130990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.512 [2024-12-15 13:30:12.131029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.512 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.512 [2024-12-15 13:30:12.138961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.512 [2024-12-15 13:30:12.138987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.512 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.512 [2024-12-15 13:30:12.144621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
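The long run of -32602 errors in this stretch of output is the target rejecting repeated namespace-add requests because NSID 1 is already attached to cnode1; the test keeps re-issuing the call during this bdevperf run. Each rejected request is equivalent to the standalone rpc.py invocation below, shown only to illustrate the parameters reported in the err lines:

```bash
# rpc.py equivalent of the request the repeated err lines report:
# method nvmf_subsystem_add_ns with bdev_name=malloc0, nsid=1 against
# nqn.2016-06.io.spdk:cnode1. Illustration only; the test drives this
# through its own RPC loop rather than this exact command line.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
	nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```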
00:16:06.512 [2024-12-15 13:30:12.144697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86320 ] 00:16:06.512 [2024-12-15 13:30:12.150982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.151007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.513 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.513 [2024-12-15 13:30:12.158982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.159005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.513 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.513 [2024-12-15 13:30:12.167027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.167050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.513 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.513 [2024-12-15 13:30:12.175005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.175029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.513 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.513 [2024-12-15 13:30:12.187028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.187051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.513 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.513 [2024-12-15 13:30:12.199028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.513 [2024-12-15 13:30:12.199051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.211039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.211061] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.223031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.223053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.235021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.235204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.247048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.247072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.259049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.259219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.271072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.271097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.277289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.772 [2024-12-15 13:30:12.283080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.283122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.295093] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.295131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.303099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.303120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.315112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.315139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.327110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.327154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.339109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.339149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 [2024-12-15 13:30:12.339863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.347100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.347121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.772 [2024-12-15 13:30:12.359121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.772 [2024-12-15 13:30:12.359148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:06.772 [2024-12-15 13:30:12.371127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:06.772 [2024-12-15 13:30:12.371171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:06.772 2024/12/15 13:30:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record failure (subsystem.c:1793 "Requested NSID 1 already in use", nvmf_rpc.c:1513 "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns with bdev_name:malloc0 nsid:1 on nqn.2016-06.io.spdk:cnode1) repeats for every add-namespace attempt from 13:30:12.383 through 13:30:12.507 ...]
00:16:07.032 Running I/O for 5 seconds...
[... the repeated nvmf_subsystem_add_ns failures continue throughout the I/O run, from 13:30:12.515 through 13:30:14.221 (elapsed 00:16:07.032 to 00:16:08.631) ...]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.230382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.230425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.246310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.246353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.255217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.255261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.269515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.269543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.284319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.284362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.295570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.295622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.631 [2024-12-15 13:30:14.311821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.631 [2024-12-15 13:30:14.311863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.631 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.327764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.327805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.344637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.344678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.361677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.361704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.378464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.378508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.394901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.394945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.411803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.411846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.427529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.427573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.444389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.444432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.461147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.461190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.478054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.478097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.493905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.493933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.510850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.510893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.528544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.528587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.543936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.543979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.560195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.560239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.890 [2024-12-15 13:30:14.571196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.890 [2024-12-15 13:30:14.571238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.890 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.587313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.149 [2024-12-15 13:30:14.587357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.149 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.604173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.149 [2024-12-15 13:30:14.604216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.149 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.620525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.149 [2024-12-15 13:30:14.620568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.149 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.637445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.149 [2024-12-15 13:30:14.637497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.149 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.654787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.149 [2024-12-15 13:30:14.654829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.149 2024/12/15 
13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.149 [2024-12-15 13:30:14.669440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.669507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.685116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.685159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.702628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.702671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.718550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.718592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.735737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.735779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.751696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.751738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.769235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.769278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.786391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.786434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.803058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.803100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.819404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.819447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.150 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.150 [2024-12-15 13:30:14.836107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.150 [2024-12-15 13:30:14.836149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.852544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.852585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.869535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.869563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.885413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.885456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.902852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.902896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.919800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.919842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.936334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.936375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.953119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.953162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.970071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.970113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:14.986273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:14.986316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:15.002843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:15.002887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:15.019475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:15.019518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:15.035852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:15.035882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:15.053071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:15.053115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.409 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.409 [2024-12-15 13:30:15.067332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.409 [2024-12-15 13:30:15.067375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.410 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.410 [2024-12-15 13:30:15.083802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.410 [2024-12-15 13:30:15.083845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.410 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.099295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.099347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.108419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.108453] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.125229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.125262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.141899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.141933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.156296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.156330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.171548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.171581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.181034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.181224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.195182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.195216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.210938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 
13:30:15.210986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.227835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.227866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.243653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.243684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.261021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.261055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.668 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.668 [2024-12-15 13:30:15.277571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.668 [2024-12-15 13:30:15.277636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.669 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.669 [2024-12-15 13:30:15.294181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.669 [2024-12-15 13:30:15.294213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.669 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.669 [2024-12-15 13:30:15.310810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.669 [2024-12-15 13:30:15.310841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.669 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.669 [2024-12-15 13:30:15.327274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:09.669 [2024-12-15 13:30:15.327306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.669 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.669 [2024-12-15 13:30:15.343722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.669 [2024-12-15 13:30:15.343752] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.669 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.360649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.360682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.377053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.377085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.393515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.393550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.410869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.410901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.426667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.426699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.443462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:09.927 [2024-12-15 13:30:15.443495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.460644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.460676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.477124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.477156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.494083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.494274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.510872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.510907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.525687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.525720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.541761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.541808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.557759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.557802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.575123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.575155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.590231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.590276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.927 [2024-12-15 13:30:15.605195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.927 [2024-12-15 13:30:15.605240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.927 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.622609] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.622663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.637409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.637455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.646959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.647005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.658478] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.658522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.675913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.675958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.691786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.691815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.708866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.708912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.724534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.724581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.736850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.736895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.752252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.752297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 
13:30:15.768497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.768543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.785116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.785160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.800717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.800761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.817686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.817731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.834537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.834583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.850998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.851042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.186 [2024-12-15 13:30:15.867834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.186 [2024-12-15 13:30:15.867878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.186 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
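For context on the burst of failures above: every rejected call is the same JSON-RPC request, asking the target to attach bdev malloc0 to subsystem nqn.2016-06.io.spdk:cnode1 as namespace 1 while NSID 1 is already occupied, so the target answers with the standard invalid-params error (Code=-32602). Below is a minimal sketch of such a request sent straight over SPDK's JSON-RPC Unix socket; the socket path and the helper function are illustrative assumptions, not something taken from this log (the test could equally issue the call through the bundled scripts/rpc.py client).

#!/usr/bin/env python3
# Illustrative sketch only: replays the failing call seen in the surrounding log.
# Assumption: an SPDK target is listening on the default RPC socket path below.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

def rpc_call(method, params, req_id=1):
    # Send one JSON-RPC 2.0 request over the Unix socket and decode the reply.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                return None
            buf += chunk
            try:
                return json.loads(buf)  # returns once a complete JSON reply arrived
            except json.JSONDecodeError:
                continue  # reply not fully received yet

# Parameters mirror the log above; with NSID 1 already attached to cnode1 the
# target rejects the call with error code -32602 ("Invalid parameters").
resp = rpc_call("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
})
print(resp)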
00:16:10.445 [2024-12-15 13:30:15.883814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:10.445 [2024-12-15 13:30:15.883859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:10.445 2024/12/15 13:30:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same failure repeats from 13:30:15.901 through 13:30:16.172, elapsed time 00:16:10.445 to 00:16:10.705 ...]
00:16:10.705 [2024-12-15 13:30:16.190561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:10.705 [2024-12-15 13:30:16.190632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.205274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.205319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.222162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.222207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.237029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.237075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.254719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.254763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.268534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.268577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.705 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.705 [2024-12-15 13:30:16.284053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.705 [2024-12-15 13:30:16.284113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.301228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.301273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.317708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.317737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.333502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.333548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.350739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.350784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.366352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.366395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.706 [2024-12-15 13:30:16.378238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.706 [2024-12-15 13:30:16.378281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.706 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.394582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.394658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.411498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.411543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.428136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.428181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.444831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.444876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.461584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.461640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.477113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.477158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.488761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.488806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.504457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.504501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.520899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.520944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.537666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.537712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.554213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.554258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.570923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.570969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.587211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.587255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.603300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.603344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.620204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.620250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.965 [2024-12-15 13:30:16.636610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.965 [2024-12-15 13:30:16.636681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.965 2024/12/15 13:30:16 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.224 [2024-12-15 13:30:16.653516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.224 [2024-12-15 13:30:16.653567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.224 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.224 [2024-12-15 13:30:16.670802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.224 [2024-12-15 13:30:16.670846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.224 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.686761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.686805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.702805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.702850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.719774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.719818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.736238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.736283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.752637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.752682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.769322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.769368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.785460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.785527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.802047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.802092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.818494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.818539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.835525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.835572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.851972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.852017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.868677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.868721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 
13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.885413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.885457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.225 [2024-12-15 13:30:16.902481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.225 [2024-12-15 13:30:16.902526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.225 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.918045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.918089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.930253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.930297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.946164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.946210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.962095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.962140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.979698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.979742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:16.995404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:16.995449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.007720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.007765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.022382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.022426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.037194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.037239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.049520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.049564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.064667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.064711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.082135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.082180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:11.484 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.484 [2024-12-15 13:30:17.098078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.484 [2024-12-15 13:30:17.098123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.485 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.485 [2024-12-15 13:30:17.109661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.485 [2024-12-15 13:30:17.109706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.485 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.485 [2024-12-15 13:30:17.125817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.485 [2024-12-15 13:30:17.125861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.485 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.485 [2024-12-15 13:30:17.142078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.485 [2024-12-15 13:30:17.142122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.485 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.485 [2024-12-15 13:30:17.159099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.485 [2024-12-15 13:30:17.159143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.485 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.743 [2024-12-15 13:30:17.175758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.175802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.192490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.192535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.207188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.207234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.222796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.222842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.239479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.239524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.256351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.256397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.272912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.272940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.288897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.288942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.305645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.305690] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.320957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.321004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.336227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.336271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.353971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.354000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.368073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.368117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.383833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.383864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.400285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 13:30:17.400331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:11.744 [2024-12-15 13:30:17.417963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.744 [2024-12-15 
13:30:17.418008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.744 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.003 [2024-12-15 13:30:17.432915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.003 [2024-12-15 13:30:17.432945] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.003 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.003 [2024-12-15 13:30:17.448710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.003 [2024-12-15 13:30:17.448753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.003 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.003 [2024-12-15 13:30:17.466215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.003 [2024-12-15 13:30:17.466260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.003 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.003 [2024-12-15 13:30:17.482266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.003 [2024-12-15 13:30:17.482310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.003 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.498611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.498655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.515487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.515532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 00:16:12.004 Latency(us) 00:16:12.004 [2024-12-15T13:30:17.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:16:12.004 [2024-12-15T13:30:17.694Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:12.004 Nvme1n1 : 5.01 13628.12 106.47 0.00 0.00 9382.00 4021.53 20375.74 00:16:12.004 [2024-12-15T13:30:17.694Z] =================================================================================================================== 00:16:12.004 [2024-12-15T13:30:17.694Z] Total : 13628.12 106.47 0.00 0.00 9382.00 4021.53 20375.74 00:16:12.004 [2024-12-15 13:30:17.527064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.527107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.539076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.539118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.551062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.551107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.563091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.563138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.575068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.575115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.587119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.587166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 
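The repeated Code=-32602 failures above are all the same condition: the zcopy test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that namespace is already attached, so subsystem.c rejects every attempt with "Requested NSID 1 already in use" and the RPC layer reports it as "Invalid parameters". The interleaved Nvme1n1 summary (about 13.6k IOPS over the 5.01 s randrw job) appears to be the foreground I/O workload finishing while the add_ns loop is still running. A minimal sketch of reproducing one rejected call by hand, assuming the target from this run were still listening on its default RPC socket and using scripts/rpc.py from the checked-out repo (the NQN and the bdev name malloc0 are taken from the logged params; nvmf_get_subsystems is only included to show the namespace that already occupies NSID 1):

# Assumption: SPDK target still running with cnode1 configured as in this test run.
cd /home/vagrant/spdk_repo/spdk
# Show the subsystem; NSID 1 (bdev malloc0) is already attached to cnode1.
./scripts/rpc.py nvmf_get_subsystems
# Re-adding the same NSID is rejected: rpc.py exits non-zero with the
# -32602 "Invalid parameters" error seen throughout the log above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1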
[2024-12-15 13:30:17.599122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.599169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.611125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.611172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.623134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.623182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.635132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.635178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.647136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.647184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.659142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.659186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.004 [2024-12-15 13:30:17.671153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.671199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:16:12.004 [2024-12-15 13:30:17.683160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.004 [2024-12-15 13:30:17.683206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.004 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.263 [2024-12-15 13:30:17.695168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.263 [2024-12-15 13:30:17.695211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.263 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.263 [2024-12-15 13:30:17.707169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.263 [2024-12-15 13:30:17.707214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.263 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.263 [2024-12-15 13:30:17.719147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.263 [2024-12-15 13:30:17.719188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.263 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.263 [2024-12-15 13:30:17.731128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.263 [2024-12-15 13:30:17.731164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.263 2024/12/15 13:30:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:12.263 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86320) - No such process 00:16:12.263 13:30:17 -- target/zcopy.sh@49 -- # wait 86320 00:16:12.263 13:30:17 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.263 13:30:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.263 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.263 13:30:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.263 13:30:17 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:12.263 13:30:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.263 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.263 delay0 00:16:12.263 13:30:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.263 13:30:17 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 
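Once the background add_ns loop is gone (the "kill: (86320) - No such process" line only means it had already exited before the explicit kill), the script swaps the plain malloc namespace for a delayed one: NSID 1 is removed, malloc0 is wrapped in a delay bdev named delay0 with 1000000 us average and p99 latency on both reads and writes, and delay0 is re-attached as NSID 1, presumably so the abort run that follows has long-lived I/O to cancel. A rough equivalent of the three rpc_cmd invocations above as direct scripts/rpc.py calls, assuming rpc_cmd is the harness's usual thin wrapper around rpc.py (all arguments are copied from the log):

# Assumption: same running target and RPC socket that the rpc_cmd calls above used.
cd /home/vagrant/spdk_repo/spdk
# Detach the existing namespace (NSID 1) from the subsystem.
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# Wrap malloc0 in a delay bdev that adds ~1 s of latency to every read and write.
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Expose the delayed bdev as NSID 1 again.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The abort example is then pointed at that namespace over TCP (build/examples/abort ... -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'), and the added latency is what leaves enough I/O in flight for the "abort submitted 364 ... success 176" summary further down.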
00:16:12.263 13:30:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.263 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:12.263 13:30:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.263 13:30:17 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:12.263 [2024-12-15 13:30:17.910462] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:18.826 Initializing NVMe Controllers 00:16:18.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:18.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:18.826 Initialization complete. Launching workers. 00:16:18.826 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 77 00:16:18.826 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 33 00:16:18.826 success 176, unsuccess 188, failed 0 00:16:18.826 13:30:23 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:18.826 13:30:23 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:18.826 13:30:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:18.826 13:30:23 -- nvmf/common.sh@116 -- # sync 00:16:18.826 13:30:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.826 13:30:24 -- nvmf/common.sh@119 -- # set +e 00:16:18.826 13:30:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.826 13:30:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.826 rmmod nvme_tcp 00:16:18.826 rmmod nvme_fabrics 00:16:18.826 rmmod nvme_keyring 00:16:18.826 13:30:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.826 13:30:24 -- nvmf/common.sh@123 -- # set -e 00:16:18.826 13:30:24 -- nvmf/common.sh@124 -- # return 0 00:16:18.826 13:30:24 -- nvmf/common.sh@477 -- # '[' -n 86146 ']' 00:16:18.826 13:30:24 -- nvmf/common.sh@478 -- # killprocess 86146 00:16:18.826 13:30:24 -- common/autotest_common.sh@936 -- # '[' -z 86146 ']' 00:16:18.826 13:30:24 -- common/autotest_common.sh@940 -- # kill -0 86146 00:16:18.826 13:30:24 -- common/autotest_common.sh@941 -- # uname 00:16:18.826 13:30:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.826 13:30:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86146 00:16:18.826 killing process with pid 86146 00:16:18.826 13:30:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:18.826 13:30:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:18.826 13:30:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86146' 00:16:18.826 13:30:24 -- common/autotest_common.sh@955 -- # kill 86146 00:16:18.826 13:30:24 -- common/autotest_common.sh@960 -- # wait 86146 00:16:18.826 13:30:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:18.826 13:30:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:18.826 13:30:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:18.826 13:30:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.826 13:30:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:18.826 13:30:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.826 13:30:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.826 13:30:24 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:18.826 13:30:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:18.826 00:16:18.826 real 0m24.646s 00:16:18.826 user 0m39.753s 00:16:18.826 sys 0m6.587s 00:16:18.826 13:30:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:18.826 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.826 ************************************ 00:16:18.826 END TEST nvmf_zcopy 00:16:18.826 ************************************ 00:16:18.826 13:30:24 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:18.826 13:30:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:18.826 13:30:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.826 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.826 ************************************ 00:16:18.826 START TEST nvmf_nmic 00:16:18.826 ************************************ 00:16:18.826 13:30:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:18.826 * Looking for test storage... 00:16:18.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:18.826 13:30:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:18.826 13:30:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:18.826 13:30:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:19.085 13:30:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:19.085 13:30:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:19.085 13:30:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:19.085 13:30:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:19.085 13:30:24 -- scripts/common.sh@335 -- # IFS=.-: 00:16:19.085 13:30:24 -- scripts/common.sh@335 -- # read -ra ver1 00:16:19.085 13:30:24 -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.085 13:30:24 -- scripts/common.sh@336 -- # read -ra ver2 00:16:19.085 13:30:24 -- scripts/common.sh@337 -- # local 'op=<' 00:16:19.085 13:30:24 -- scripts/common.sh@339 -- # ver1_l=2 00:16:19.085 13:30:24 -- scripts/common.sh@340 -- # ver2_l=1 00:16:19.085 13:30:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:19.085 13:30:24 -- scripts/common.sh@343 -- # case "$op" in 00:16:19.085 13:30:24 -- scripts/common.sh@344 -- # : 1 00:16:19.085 13:30:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:19.085 13:30:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.085 13:30:24 -- scripts/common.sh@364 -- # decimal 1 00:16:19.085 13:30:24 -- scripts/common.sh@352 -- # local d=1 00:16:19.085 13:30:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.085 13:30:24 -- scripts/common.sh@354 -- # echo 1 00:16:19.085 13:30:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:19.085 13:30:24 -- scripts/common.sh@365 -- # decimal 2 00:16:19.085 13:30:24 -- scripts/common.sh@352 -- # local d=2 00:16:19.085 13:30:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.085 13:30:24 -- scripts/common.sh@354 -- # echo 2 00:16:19.085 13:30:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:19.085 13:30:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:19.085 13:30:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:19.085 13:30:24 -- scripts/common.sh@367 -- # return 0 00:16:19.085 13:30:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.085 13:30:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.085 --rc genhtml_branch_coverage=1 00:16:19.085 --rc genhtml_function_coverage=1 00:16:19.085 --rc genhtml_legend=1 00:16:19.085 --rc geninfo_all_blocks=1 00:16:19.085 --rc geninfo_unexecuted_blocks=1 00:16:19.085 00:16:19.085 ' 00:16:19.085 13:30:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.085 --rc genhtml_branch_coverage=1 00:16:19.085 --rc genhtml_function_coverage=1 00:16:19.085 --rc genhtml_legend=1 00:16:19.085 --rc geninfo_all_blocks=1 00:16:19.085 --rc geninfo_unexecuted_blocks=1 00:16:19.085 00:16:19.085 ' 00:16:19.085 13:30:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:19.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.085 --rc genhtml_branch_coverage=1 00:16:19.085 --rc genhtml_function_coverage=1 00:16:19.085 --rc genhtml_legend=1 00:16:19.085 --rc geninfo_all_blocks=1 00:16:19.086 --rc geninfo_unexecuted_blocks=1 00:16:19.086 00:16:19.086 ' 00:16:19.086 13:30:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:19.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.086 --rc genhtml_branch_coverage=1 00:16:19.086 --rc genhtml_function_coverage=1 00:16:19.086 --rc genhtml_legend=1 00:16:19.086 --rc geninfo_all_blocks=1 00:16:19.086 --rc geninfo_unexecuted_blocks=1 00:16:19.086 00:16:19.086 ' 00:16:19.086 13:30:24 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.086 13:30:24 -- nvmf/common.sh@7 -- # uname -s 00:16:19.086 13:30:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.086 13:30:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.086 13:30:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.086 13:30:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.086 13:30:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.086 13:30:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.086 13:30:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.086 13:30:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.086 13:30:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.086 13:30:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:19.086 
13:30:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:19.086 13:30:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.086 13:30:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.086 13:30:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.086 13:30:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.086 13:30:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.086 13:30:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.086 13:30:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.086 13:30:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.086 13:30:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.086 13:30:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.086 13:30:24 -- paths/export.sh@5 -- # export PATH 00:16:19.086 13:30:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.086 13:30:24 -- nvmf/common.sh@46 -- # : 0 00:16:19.086 13:30:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:19.086 13:30:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:19.086 13:30:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:19.086 13:30:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.086 13:30:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.086 13:30:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
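The host NQN generated above with nvme gen-hostnqn, together with the UUID-derived host ID next to it, is reused by every nvme connect call later in this test. A stand-alone equivalent (the parameter-expansion trick for extracting the UUID is illustrative, not copied from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420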
00:16:19.086 13:30:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:19.086 13:30:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:19.086 13:30:24 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.086 13:30:24 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.086 13:30:24 -- target/nmic.sh@14 -- # nvmftestinit 00:16:19.086 13:30:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:19.086 13:30:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.086 13:30:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:19.086 13:30:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:19.086 13:30:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:19.086 13:30:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.086 13:30:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.086 13:30:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.086 13:30:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:19.086 13:30:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:19.086 13:30:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.086 13:30:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.086 13:30:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.086 13:30:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:19.086 13:30:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.086 13:30:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.086 13:30:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.086 13:30:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.086 13:30:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.086 13:30:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.086 13:30:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.086 13:30:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.086 13:30:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:19.086 13:30:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:19.086 Cannot find device "nvmf_tgt_br" 00:16:19.086 13:30:24 -- nvmf/common.sh@154 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.086 Cannot find device "nvmf_tgt_br2" 00:16:19.086 13:30:24 -- nvmf/common.sh@155 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:19.086 13:30:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:19.086 Cannot find device "nvmf_tgt_br" 00:16:19.086 13:30:24 -- nvmf/common.sh@157 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:19.086 Cannot find device "nvmf_tgt_br2" 00:16:19.086 13:30:24 -- nvmf/common.sh@158 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:19.086 13:30:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:19.086 13:30:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.086 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:19.086 13:30:24 -- nvmf/common.sh@161 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.086 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.086 13:30:24 -- nvmf/common.sh@162 -- # true 00:16:19.086 13:30:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.086 13:30:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.345 13:30:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.345 13:30:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.345 13:30:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.345 13:30:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.345 13:30:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.345 13:30:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.345 13:30:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.345 13:30:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:19.345 13:30:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:19.345 13:30:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:19.345 13:30:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:19.345 13:30:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.345 13:30:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.345 13:30:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.345 13:30:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:19.345 13:30:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:19.345 13:30:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.345 13:30:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.345 13:30:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.345 13:30:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.345 13:30:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.345 13:30:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:19.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:19.345 00:16:19.345 --- 10.0.0.2 ping statistics --- 00:16:19.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.345 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:19.345 13:30:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:19.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:19.346 00:16:19.346 --- 10.0.0.3 ping statistics --- 00:16:19.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.346 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:19.346 13:30:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:19.346 00:16:19.346 --- 10.0.0.1 ping statistics --- 00:16:19.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.346 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:19.346 13:30:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.346 13:30:24 -- nvmf/common.sh@421 -- # return 0 00:16:19.346 13:30:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:19.346 13:30:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.346 13:30:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:19.346 13:30:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:19.346 13:30:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.346 13:30:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:19.346 13:30:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:19.346 13:30:24 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:19.346 13:30:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:19.346 13:30:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.346 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 13:30:24 -- nvmf/common.sh@469 -- # nvmfpid=86640 00:16:19.346 13:30:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.346 13:30:24 -- nvmf/common.sh@470 -- # waitforlisten 86640 00:16:19.346 13:30:24 -- common/autotest_common.sh@829 -- # '[' -z 86640 ']' 00:16:19.346 13:30:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.346 13:30:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.346 13:30:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.346 13:30:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.346 13:30:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.346 [2024-12-15 13:30:25.018381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:19.346 [2024-12-15 13:30:25.018469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.610 [2024-12-15 13:30:25.161297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.610 [2024-12-15 13:30:25.219270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:19.610 [2024-12-15 13:30:25.219445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.610 [2024-12-15 13:30:25.219458] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.610 [2024-12-15 13:30:25.219467] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:19.610 [2024-12-15 13:30:25.219628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.610 [2024-12-15 13:30:25.220233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.610 [2024-12-15 13:30:25.220385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.610 [2024-12-15 13:30:25.220391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.546 13:30:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.546 13:30:26 -- common/autotest_common.sh@862 -- # return 0 00:16:20.546 13:30:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:20.546 13:30:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 13:30:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.546 13:30:26 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 [2024-12-15 13:30:26.079873] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 Malloc0 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 [2024-12-15 13:30:26.159669] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 test case1: single bdev can't be used in multiple subsystems 00:16:20.546 13:30:26 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:20.546 13:30:26 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:20.546 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 
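At this point the nmic target side is fully assembled: a TCP transport, a 64 MB malloc bdev, subsystem cnode1 exposing it as a namespace, and a listener on 10.0.0.2:4420. The same bring-up condensed into plain rpc.py calls (rpc_cmd in the trace is the test suite's wrapper around these):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MB, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420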
00:16:20.546 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 13:30:26 -- target/nmic.sh@28 -- # nmic_status=0 00:16:20.546 13:30:26 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:20.547 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.547 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.547 [2024-12-15 13:30:26.183491] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:20.547 [2024-12-15 13:30:26.183539] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:20.547 [2024-12-15 13:30:26.183550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:20.547 2024/12/15 13:30:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:20.547 request: 00:16:20.547 { 00:16:20.547 "method": "nvmf_subsystem_add_ns", 00:16:20.547 "params": { 00:16:20.547 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:20.547 "namespace": { 00:16:20.547 "bdev_name": "Malloc0" 00:16:20.547 } 00:16:20.547 } 00:16:20.547 } 00:16:20.547 Got JSON-RPC error response 00:16:20.547 GoRPCClient: error on JSON-RPC call 00:16:20.547 13:30:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:20.547 13:30:26 -- target/nmic.sh@29 -- # nmic_status=1 00:16:20.547 13:30:26 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:20.547 Adding namespace failed - expected result. 00:16:20.547 13:30:26 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
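Test case1 passes precisely because the RPC fails: Malloc0 is already claimed exclusive_write by cnode1, so it cannot also be added to cnode2. If you want to check for such a claim before attempting the add, bdev_get_bdevs reports it (the jq filter below is a sketch; the exact field layout can vary between SPDK releases):

    scripts/rpc.py bdev_get_bdevs -b Malloc0 | jq '.[0].claimed'
    # true  -> some module (here the NVMe-oF target) already owns the bdev exclusively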
00:16:20.547 test case2: host connect to nvmf target in multiple paths 00:16:20.547 13:30:26 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:20.547 13:30:26 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:20.547 13:30:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.547 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:20.547 [2024-12-15 13:30:26.195648] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:20.547 13:30:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.547 13:30:26 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.805 13:30:26 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:21.062 13:30:26 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.062 13:30:26 -- common/autotest_common.sh@1187 -- # local i=0 00:16:21.062 13:30:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.062 13:30:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:21.062 13:30:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:22.964 13:30:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:22.964 13:30:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:22.964 13:30:28 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.964 13:30:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:22.964 13:30:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.964 13:30:28 -- common/autotest_common.sh@1197 -- # return 0 00:16:22.964 13:30:28 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:22.964 [global] 00:16:22.964 thread=1 00:16:22.964 invalidate=1 00:16:22.964 rw=write 00:16:22.964 time_based=1 00:16:22.964 runtime=1 00:16:22.964 ioengine=libaio 00:16:22.964 direct=1 00:16:22.964 bs=4096 00:16:22.964 iodepth=1 00:16:22.964 norandommap=0 00:16:22.964 numjobs=1 00:16:22.964 00:16:22.964 verify_dump=1 00:16:22.964 verify_backlog=512 00:16:22.964 verify_state_save=0 00:16:22.964 do_verify=1 00:16:22.964 verify=crc32c-intel 00:16:22.964 [job0] 00:16:22.964 filename=/dev/nvme0n1 00:16:22.964 Could not set queue depth (nvme0n1) 00:16:23.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:23.222 fio-3.35 00:16:23.222 Starting 1 thread 00:16:24.597 00:16:24.597 job0: (groupid=0, jobs=1): err= 0: pid=86750: Sun Dec 15 13:30:29 2024 00:16:24.597 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:24.597 slat (nsec): min=12288, max=59933, avg=14827.14, stdev=4089.20 00:16:24.597 clat (usec): min=111, max=192, avg=133.04, stdev=14.78 00:16:24.597 lat (usec): min=124, max=222, avg=147.86, stdev=15.73 00:16:24.597 clat percentiles (usec): 00:16:24.597 | 1.00th=[ 116], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:16:24.597 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 133], 00:16:24.597 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 155], 
95.00th=[ 163], 00:16:24.597 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 192], 00:16:24.597 | 99.99th=[ 192] 00:16:24.597 write: IOPS=3802, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1001msec); 0 zone resets 00:16:24.597 slat (usec): min=18, max=130, avg=22.60, stdev= 6.82 00:16:24.597 clat (usec): min=80, max=191, avg=97.93, stdev=12.91 00:16:24.597 lat (usec): min=99, max=255, avg=120.53, stdev=15.63 00:16:24.597 clat percentiles (usec): 00:16:24.597 | 1.00th=[ 83], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:16:24.597 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:16:24.597 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 124], 00:16:24.597 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 161], 99.95th=[ 174], 00:16:24.597 | 99.99th=[ 192] 00:16:24.597 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:24.597 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:24.597 lat (usec) : 100=35.81%, 250=64.19% 00:16:24.597 cpu : usr=2.60%, sys=10.20%, ctx=7390, majf=0, minf=5 00:16:24.597 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.597 issued rwts: total=3584,3806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.597 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.597 00:16:24.597 Run status group 0 (all jobs): 00:16:24.597 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:24.597 WRITE: bw=14.9MiB/s (15.6MB/s), 14.9MiB/s-14.9MiB/s (15.6MB/s-15.6MB/s), io=14.9MiB (15.6MB), run=1001-1001msec 00:16:24.597 00:16:24.597 Disk stats (read/write): 00:16:24.597 nvme0n1: ios=3125/3584, merge=0/0, ticks=465/417, in_queue=882, util=91.38% 00:16:24.597 13:30:29 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:24.597 13:30:29 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.597 13:30:29 -- common/autotest_common.sh@1208 -- # local i=0 00:16:24.597 13:30:29 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:24.597 13:30:29 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.597 13:30:29 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.597 13:30:29 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:24.597 13:30:29 -- common/autotest_common.sh@1220 -- # return 0 00:16:24.597 13:30:29 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:24.597 13:30:29 -- target/nmic.sh@53 -- # nvmftestfini 00:16:24.597 13:30:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:24.597 13:30:29 -- nvmf/common.sh@116 -- # sync 00:16:24.597 13:30:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:24.597 13:30:30 -- nvmf/common.sh@119 -- # set +e 00:16:24.597 13:30:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:24.597 13:30:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:24.597 rmmod nvme_tcp 00:16:24.597 rmmod nvme_fabrics 00:16:24.597 rmmod nvme_keyring 00:16:24.598 13:30:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:24.598 13:30:30 -- nvmf/common.sh@123 -- # set -e 00:16:24.598 13:30:30 -- nvmf/common.sh@124 -- # return 0 00:16:24.598 13:30:30 -- nvmf/common.sh@477 -- # '[' -n 86640 ']' 
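Teardown mirrors the bring-up: disconnect the initiator, unload the kernel NVMe/TCP modules, then stop the target process. Written out by hand, what nvmftestfini does here amounts to roughly the following (86640 is the nvmf_tgt pid reported at startup):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 86640        # killprocess in the trace adds retries and a ps-based sanity check around this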
00:16:24.598 13:30:30 -- nvmf/common.sh@478 -- # killprocess 86640 00:16:24.598 13:30:30 -- common/autotest_common.sh@936 -- # '[' -z 86640 ']' 00:16:24.598 13:30:30 -- common/autotest_common.sh@940 -- # kill -0 86640 00:16:24.598 13:30:30 -- common/autotest_common.sh@941 -- # uname 00:16:24.598 13:30:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:24.598 13:30:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86640 00:16:24.598 13:30:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:24.598 13:30:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:24.598 13:30:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86640' 00:16:24.598 killing process with pid 86640 00:16:24.598 13:30:30 -- common/autotest_common.sh@955 -- # kill 86640 00:16:24.598 13:30:30 -- common/autotest_common.sh@960 -- # wait 86640 00:16:24.856 13:30:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:24.856 13:30:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:24.856 13:30:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:24.856 13:30:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.856 13:30:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:24.856 13:30:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.856 13:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.856 13:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.856 13:30:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:24.856 00:16:24.856 real 0m5.953s 00:16:24.856 user 0m20.069s 00:16:24.856 sys 0m1.393s 00:16:24.856 13:30:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:24.856 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 ************************************ 00:16:24.856 END TEST nvmf_nmic 00:16:24.856 ************************************ 00:16:24.856 13:30:30 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:24.856 13:30:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:24.856 13:30:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.856 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 ************************************ 00:16:24.856 START TEST nvmf_fio_target 00:16:24.856 ************************************ 00:16:24.856 13:30:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:24.856 * Looking for test storage... 
00:16:24.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:24.856 13:30:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:24.856 13:30:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:24.856 13:30:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:25.116 13:30:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:25.116 13:30:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:25.116 13:30:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:25.116 13:30:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:25.116 13:30:30 -- scripts/common.sh@335 -- # IFS=.-: 00:16:25.116 13:30:30 -- scripts/common.sh@335 -- # read -ra ver1 00:16:25.116 13:30:30 -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.116 13:30:30 -- scripts/common.sh@336 -- # read -ra ver2 00:16:25.116 13:30:30 -- scripts/common.sh@337 -- # local 'op=<' 00:16:25.116 13:30:30 -- scripts/common.sh@339 -- # ver1_l=2 00:16:25.116 13:30:30 -- scripts/common.sh@340 -- # ver2_l=1 00:16:25.116 13:30:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:25.116 13:30:30 -- scripts/common.sh@343 -- # case "$op" in 00:16:25.116 13:30:30 -- scripts/common.sh@344 -- # : 1 00:16:25.116 13:30:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:25.116 13:30:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:25.116 13:30:30 -- scripts/common.sh@364 -- # decimal 1 00:16:25.116 13:30:30 -- scripts/common.sh@352 -- # local d=1 00:16:25.116 13:30:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.116 13:30:30 -- scripts/common.sh@354 -- # echo 1 00:16:25.116 13:30:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:25.116 13:30:30 -- scripts/common.sh@365 -- # decimal 2 00:16:25.116 13:30:30 -- scripts/common.sh@352 -- # local d=2 00:16:25.116 13:30:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.116 13:30:30 -- scripts/common.sh@354 -- # echo 2 00:16:25.116 13:30:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:25.116 13:30:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:25.116 13:30:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:25.116 13:30:30 -- scripts/common.sh@367 -- # return 0 00:16:25.116 13:30:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.116 13:30:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:25.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.116 --rc genhtml_branch_coverage=1 00:16:25.116 --rc genhtml_function_coverage=1 00:16:25.116 --rc genhtml_legend=1 00:16:25.116 --rc geninfo_all_blocks=1 00:16:25.116 --rc geninfo_unexecuted_blocks=1 00:16:25.116 00:16:25.116 ' 00:16:25.116 13:30:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:25.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.116 --rc genhtml_branch_coverage=1 00:16:25.116 --rc genhtml_function_coverage=1 00:16:25.116 --rc genhtml_legend=1 00:16:25.116 --rc geninfo_all_blocks=1 00:16:25.116 --rc geninfo_unexecuted_blocks=1 00:16:25.116 00:16:25.116 ' 00:16:25.116 13:30:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:25.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.116 --rc genhtml_branch_coverage=1 00:16:25.116 --rc genhtml_function_coverage=1 00:16:25.116 --rc genhtml_legend=1 00:16:25.116 --rc geninfo_all_blocks=1 00:16:25.116 --rc geninfo_unexecuted_blocks=1 00:16:25.116 00:16:25.116 ' 00:16:25.116 
13:30:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:25.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.116 --rc genhtml_branch_coverage=1 00:16:25.116 --rc genhtml_function_coverage=1 00:16:25.116 --rc genhtml_legend=1 00:16:25.116 --rc geninfo_all_blocks=1 00:16:25.116 --rc geninfo_unexecuted_blocks=1 00:16:25.116 00:16:25.116 ' 00:16:25.116 13:30:30 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.116 13:30:30 -- nvmf/common.sh@7 -- # uname -s 00:16:25.116 13:30:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.116 13:30:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.116 13:30:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.116 13:30:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.116 13:30:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.116 13:30:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.116 13:30:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.116 13:30:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.116 13:30:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.116 13:30:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.116 13:30:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:25.116 13:30:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:25.116 13:30:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.116 13:30:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.116 13:30:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.116 13:30:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.116 13:30:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.116 13:30:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.116 13:30:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.116 13:30:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.116 13:30:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 13:30:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 13:30:30 -- paths/export.sh@5 -- # export PATH 00:16:25.117 13:30:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.117 13:30:30 -- nvmf/common.sh@46 -- # : 0 00:16:25.117 13:30:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:25.117 13:30:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:25.117 13:30:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:25.117 13:30:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.117 13:30:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.117 13:30:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:25.117 13:30:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:25.117 13:30:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:25.117 13:30:30 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.117 13:30:30 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.117 13:30:30 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:25.117 13:30:30 -- target/fio.sh@16 -- # nvmftestinit 00:16:25.117 13:30:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:25.117 13:30:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.117 13:30:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:25.117 13:30:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:25.117 13:30:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:25.117 13:30:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.117 13:30:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.117 13:30:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.117 13:30:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:25.117 13:30:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:25.117 13:30:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:25.117 13:30:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:25.117 13:30:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:25.117 13:30:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:25.117 13:30:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.117 13:30:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.117 13:30:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.117 13:30:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:25.117 13:30:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.117 13:30:30 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.117 13:30:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.117 13:30:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.117 13:30:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.117 13:30:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.117 13:30:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.117 13:30:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.117 13:30:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:25.117 13:30:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:25.117 Cannot find device "nvmf_tgt_br" 00:16:25.117 13:30:30 -- nvmf/common.sh@154 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.117 Cannot find device "nvmf_tgt_br2" 00:16:25.117 13:30:30 -- nvmf/common.sh@155 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:25.117 13:30:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:25.117 Cannot find device "nvmf_tgt_br" 00:16:25.117 13:30:30 -- nvmf/common.sh@157 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:25.117 Cannot find device "nvmf_tgt_br2" 00:16:25.117 13:30:30 -- nvmf/common.sh@158 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:25.117 13:30:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:25.117 13:30:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.117 13:30:30 -- nvmf/common.sh@161 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.117 13:30:30 -- nvmf/common.sh@162 -- # true 00:16:25.117 13:30:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.117 13:30:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.117 13:30:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.117 13:30:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.117 13:30:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.117 13:30:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.117 13:30:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.117 13:30:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.376 13:30:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.376 13:30:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:25.376 13:30:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:25.376 13:30:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:25.376 13:30:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:25.376 13:30:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.376 13:30:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
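This common.sh block rebuilds the virtual test network from scratch: a network namespace for the target, veth pairs whose host-side ends are enslaved to a small bridge, and fixed 10.0.0.x addresses, so the initiator at 10.0.0.1 can reach the target at 10.0.0.2. Reduced to the data-path essentials (the second target interface at 10.0.0.3 is left out of this sketch):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br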
00:16:25.376 13:30:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.376 13:30:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:25.376 13:30:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:25.376 13:30:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.376 13:30:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.376 13:30:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.376 13:30:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.376 13:30:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.376 13:30:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:25.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:25.376 00:16:25.376 --- 10.0.0.2 ping statistics --- 00:16:25.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.376 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:25.376 13:30:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:25.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.027 ms 00:16:25.376 00:16:25.376 --- 10.0.0.3 ping statistics --- 00:16:25.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.376 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:25.376 13:30:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:25.376 00:16:25.376 --- 10.0.0.1 ping statistics --- 00:16:25.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.376 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:25.376 13:30:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.376 13:30:30 -- nvmf/common.sh@421 -- # return 0 00:16:25.376 13:30:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:25.376 13:30:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.376 13:30:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:25.376 13:30:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:25.376 13:30:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.376 13:30:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:25.376 13:30:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:25.376 13:30:30 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:25.376 13:30:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:25.376 13:30:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.376 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:25.376 13:30:30 -- nvmf/common.sh@469 -- # nvmfpid=86941 00:16:25.376 13:30:30 -- nvmf/common.sh@470 -- # waitforlisten 86941 00:16:25.376 13:30:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:25.376 13:30:30 -- common/autotest_common.sh@829 -- # '[' -z 86941 ']' 00:16:25.376 13:30:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.376 13:30:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
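The connectivity checks just above (an iptables ACCEPT for port 4420, then pings in each direction) are what the harness runs before starting nvmf_tgt inside the namespace; if any of them fail, no later nvme connect in this test can succeed. In isolation they are:

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target over the bridge
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator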
00:16:25.376 13:30:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.376 13:30:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.376 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:25.376 [2024-12-15 13:30:30.982082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:25.376 [2024-12-15 13:30:30.982177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.635 [2024-12-15 13:30:31.123795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.635 [2024-12-15 13:30:31.188620] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:25.635 [2024-12-15 13:30:31.188748] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.635 [2024-12-15 13:30:31.188760] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.635 [2024-12-15 13:30:31.188767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.635 [2024-12-15 13:30:31.188913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.635 [2024-12-15 13:30:31.189382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.635 [2024-12-15 13:30:31.189989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.635 [2024-12-15 13:30:31.190001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.571 13:30:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.571 13:30:31 -- common/autotest_common.sh@862 -- # return 0 00:16:26.571 13:30:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:26.571 13:30:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.571 13:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.571 13:30:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.571 13:30:31 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:26.571 [2024-12-15 13:30:32.252161] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.830 13:30:32 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.088 13:30:32 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:27.088 13:30:32 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.347 13:30:32 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:27.347 13:30:32 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.605 13:30:33 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:27.605 13:30:33 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.864 13:30:33 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:27.864 13:30:33 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:28.122 13:30:33 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.381 13:30:34 -- 
target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:28.381 13:30:34 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.639 13:30:34 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:28.639 13:30:34 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:28.898 13:30:34 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:28.898 13:30:34 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:29.156 13:30:34 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:29.415 13:30:34 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:29.415 13:30:34 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.672 13:30:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:29.672 13:30:35 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:29.930 13:30:35 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.188 [2024-12-15 13:30:35.674521] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.188 13:30:35 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:30.447 13:30:35 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:30.447 13:30:36 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.706 13:30:36 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:30.706 13:30:36 -- common/autotest_common.sh@1187 -- # local i=0 00:16:30.706 13:30:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.706 13:30:36 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:30.706 13:30:36 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:30.706 13:30:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:32.611 13:30:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:32.611 13:30:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:32.611 13:30:38 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.870 13:30:38 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:32.870 13:30:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.870 13:30:38 -- common/autotest_common.sh@1197 -- # return 0 00:16:32.870 13:30:38 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:32.870 [global] 00:16:32.870 thread=1 00:16:32.870 invalidate=1 00:16:32.870 rw=write 00:16:32.870 time_based=1 00:16:32.870 runtime=1 00:16:32.870 ioengine=libaio 00:16:32.870 direct=1 00:16:32.870 bs=4096 00:16:32.870 iodepth=1 00:16:32.870 norandommap=0 00:16:32.870 numjobs=1 00:16:32.870 00:16:32.870 verify_dump=1 00:16:32.870 verify_backlog=512 
00:16:32.870 verify_state_save=0 00:16:32.870 do_verify=1 00:16:32.870 verify=crc32c-intel 00:16:32.870 [job0] 00:16:32.870 filename=/dev/nvme0n1 00:16:32.870 [job1] 00:16:32.870 filename=/dev/nvme0n2 00:16:32.870 [job2] 00:16:32.870 filename=/dev/nvme0n3 00:16:32.870 [job3] 00:16:32.870 filename=/dev/nvme0n4 00:16:32.870 Could not set queue depth (nvme0n1) 00:16:32.870 Could not set queue depth (nvme0n2) 00:16:32.870 Could not set queue depth (nvme0n3) 00:16:32.870 Could not set queue depth (nvme0n4) 00:16:32.870 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.870 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.870 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.870 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.870 fio-3.35 00:16:32.870 Starting 4 threads 00:16:34.272 00:16:34.272 job0: (groupid=0, jobs=1): err= 0: pid=87233: Sun Dec 15 13:30:39 2024 00:16:34.272 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:34.272 slat (nsec): min=12822, max=51226, avg=15308.45, stdev=2967.83 00:16:34.272 clat (usec): min=121, max=196, avg=147.62, stdev=12.36 00:16:34.272 lat (usec): min=135, max=210, avg=162.93, stdev=12.75 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:16:34.272 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:16:34.272 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 174], 00:16:34.272 | 99.00th=[ 186], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 194], 00:16:34.272 | 99.99th=[ 196] 00:16:34.272 write: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1001msec); 0 zone resets 00:16:34.272 slat (nsec): min=18955, max=93497, avg=22933.18, stdev=5137.89 00:16:34.272 clat (usec): min=91, max=2392, avg=118.36, stdev=41.17 00:16:34.272 lat (usec): min=111, max=2413, avg=141.29, stdev=41.54 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:16:34.272 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 119], 00:16:34.272 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 141], 00:16:34.272 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 570], 00:16:34.272 | 99.99th=[ 2409] 00:16:34.272 bw ( KiB/s): min=13520, max=13520, per=31.83%, avg=13520.00, stdev= 0.00, samples=1 00:16:34.272 iops : min= 3380, max= 3380, avg=3380.00, stdev= 0.00, samples=1 00:16:34.272 lat (usec) : 100=1.32%, 250=98.64%, 500=0.02%, 750=0.02% 00:16:34.272 lat (msec) : 4=0.02% 00:16:34.272 cpu : usr=2.30%, sys=9.40%, ctx=6538, majf=0, minf=7 00:16:34.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 issued rwts: total=3072,3462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.272 job1: (groupid=0, jobs=1): err= 0: pid=87234: Sun Dec 15 13:30:39 2024 00:16:34.272 read: IOPS=1819, BW=7277KiB/s (7451kB/s)(7284KiB/1001msec) 00:16:34.272 slat (nsec): min=10605, max=42936, avg=12180.72, stdev=2806.97 00:16:34.272 clat (usec): min=227, max=573, avg=265.97, stdev=21.05 00:16:34.272 lat (usec): 
min=239, max=585, avg=278.15, stdev=21.33 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:16:34.272 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:16:34.272 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:16:34.272 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 562], 99.95th=[ 570], 00:16:34.272 | 99.99th=[ 570] 00:16:34.272 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:34.272 slat (nsec): min=16011, max=87437, avg=22486.79, stdev=5442.17 00:16:34.272 clat (usec): min=101, max=771, avg=215.53, stdev=28.11 00:16:34.272 lat (usec): min=133, max=797, avg=238.01, stdev=28.40 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:16:34.272 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:16:34.272 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 247], 00:16:34.272 | 99.00th=[ 269], 99.50th=[ 388], 99.90th=[ 502], 99.95th=[ 510], 00:16:34.272 | 99.99th=[ 775] 00:16:34.272 bw ( KiB/s): min= 8192, max= 8192, per=19.29%, avg=8192.00, stdev= 0.00, samples=1 00:16:34.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:34.272 lat (usec) : 250=59.58%, 500=40.24%, 750=0.16%, 1000=0.03% 00:16:34.272 cpu : usr=0.90%, sys=5.90%, ctx=3872, majf=0, minf=13 00:16:34.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 issued rwts: total=1821,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.272 job2: (groupid=0, jobs=1): err= 0: pid=87235: Sun Dec 15 13:30:39 2024 00:16:34.272 read: IOPS=1820, BW=7281KiB/s (7455kB/s)(7288KiB/1001msec) 00:16:34.272 slat (nsec): min=11114, max=45275, avg=13928.14, stdev=2958.88 00:16:34.272 clat (usec): min=174, max=591, avg=264.11, stdev=21.09 00:16:34.272 lat (usec): min=195, max=605, avg=278.04, stdev=21.16 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:16:34.272 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:16:34.272 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:16:34.272 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 570], 99.95th=[ 594], 00:16:34.272 | 99.99th=[ 594] 00:16:34.272 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:34.272 slat (nsec): min=15623, max=73328, avg=22571.85, stdev=5341.35 00:16:34.272 clat (usec): min=144, max=741, avg=215.47, stdev=25.10 00:16:34.272 lat (usec): min=175, max=766, avg=238.05, stdev=25.40 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:16:34.272 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:16:34.272 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 237], 95.00th=[ 245], 00:16:34.272 | 99.00th=[ 269], 99.50th=[ 338], 99.90th=[ 494], 99.95th=[ 553], 00:16:34.272 | 99.99th=[ 742] 00:16:34.272 bw ( KiB/s): min= 8192, max= 8192, per=19.29%, avg=8192.00, stdev= 0.00, samples=1 00:16:34.272 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:34.272 lat (usec) : 250=61.63%, 500=38.24%, 750=0.13% 00:16:34.272 cpu : usr=1.20%, sys=5.60%, ctx=3870, majf=0, minf=7 
00:16:34.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.272 issued rwts: total=1822,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.272 job3: (groupid=0, jobs=1): err= 0: pid=87236: Sun Dec 15 13:30:39 2024 00:16:34.272 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1000msec) 00:16:34.272 slat (nsec): min=12789, max=60481, avg=14943.49, stdev=3091.30 00:16:34.272 clat (usec): min=133, max=462, avg=160.07, stdev=14.09 00:16:34.272 lat (usec): min=146, max=476, avg=175.01, stdev=14.55 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:16:34.272 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:16:34.272 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 186], 00:16:34.272 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 215], 00:16:34.272 | 99.99th=[ 461] 00:16:34.272 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:16:34.272 slat (nsec): min=18437, max=70174, avg=22402.27, stdev=4733.39 00:16:34.272 clat (usec): min=100, max=625, avg=126.89, stdev=15.11 00:16:34.272 lat (usec): min=120, max=644, avg=149.29, stdev=16.17 00:16:34.272 clat percentiles (usec): 00:16:34.272 | 1.00th=[ 108], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:16:34.272 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:16:34.272 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 151], 00:16:34.272 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 210], 00:16:34.272 | 99.99th=[ 627] 00:16:34.272 bw ( KiB/s): min=12288, max=12288, per=28.93%, avg=12288.00, stdev= 0.00, samples=1 00:16:34.272 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:34.272 lat (usec) : 250=99.97%, 500=0.02%, 750=0.02% 00:16:34.272 cpu : usr=2.60%, sys=8.10%, ctx=6110, majf=0, minf=12 00:16:34.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.273 issued rwts: total=3038,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.273 00:16:34.273 Run status group 0 (all jobs): 00:16:34.273 READ: bw=38.1MiB/s (39.9MB/s), 7277KiB/s-12.0MiB/s (7451kB/s-12.6MB/s), io=38.1MiB (39.9MB), run=1000-1001msec 00:16:34.273 WRITE: bw=41.5MiB/s (43.5MB/s), 8184KiB/s-13.5MiB/s (8380kB/s-14.2MB/s), io=41.5MiB (43.5MB), run=1000-1001msec 00:16:34.273 00:16:34.273 Disk stats (read/write): 00:16:34.273 nvme0n1: ios=2626/3072, merge=0/0, ticks=408/387, in_queue=795, util=88.09% 00:16:34.273 nvme0n2: ios=1571/1804, merge=0/0, ticks=411/415, in_queue=826, util=88.25% 00:16:34.273 nvme0n3: ios=1536/1803, merge=0/0, ticks=408/397, in_queue=805, util=89.14% 00:16:34.273 nvme0n4: ios=2560/2709, merge=0/0, ticks=419/382, in_queue=801, util=89.80% 00:16:34.273 13:30:39 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:34.273 [global] 00:16:34.273 thread=1 00:16:34.273 invalidate=1 00:16:34.273 rw=randwrite 00:16:34.273 time_based=1 00:16:34.273 runtime=1 00:16:34.273 
ioengine=libaio 00:16:34.273 direct=1 00:16:34.273 bs=4096 00:16:34.273 iodepth=1 00:16:34.273 norandommap=0 00:16:34.273 numjobs=1 00:16:34.273 00:16:34.273 verify_dump=1 00:16:34.273 verify_backlog=512 00:16:34.273 verify_state_save=0 00:16:34.273 do_verify=1 00:16:34.273 verify=crc32c-intel 00:16:34.273 [job0] 00:16:34.273 filename=/dev/nvme0n1 00:16:34.273 [job1] 00:16:34.273 filename=/dev/nvme0n2 00:16:34.273 [job2] 00:16:34.273 filename=/dev/nvme0n3 00:16:34.273 [job3] 00:16:34.273 filename=/dev/nvme0n4 00:16:34.273 Could not set queue depth (nvme0n1) 00:16:34.273 Could not set queue depth (nvme0n2) 00:16:34.273 Could not set queue depth (nvme0n3) 00:16:34.273 Could not set queue depth (nvme0n4) 00:16:34.273 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.273 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.273 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.273 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.273 fio-3.35 00:16:34.273 Starting 4 threads 00:16:35.663 00:16:35.663 job0: (groupid=0, jobs=1): err= 0: pid=87295: Sun Dec 15 13:30:41 2024 00:16:35.663 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:35.663 slat (nsec): min=10932, max=58207, avg=16255.72, stdev=4569.43 00:16:35.663 clat (usec): min=140, max=2228, avg=245.12, stdev=69.79 00:16:35.663 lat (usec): min=157, max=2243, avg=261.37, stdev=69.25 00:16:35.663 clat percentiles (usec): 00:16:35.663 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 196], 00:16:35.663 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:16:35.663 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 347], 00:16:35.663 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 545], 99.95th=[ 545], 00:16:35.663 | 99.99th=[ 2245] 00:16:35.663 write: IOPS=2115, BW=8464KiB/s (8667kB/s)(8472KiB/1001msec); 0 zone resets 00:16:35.663 slat (nsec): min=10760, max=82279, avg=20932.99, stdev=6575.22 00:16:35.663 clat (usec): min=100, max=317, avg=195.26, stdev=56.72 00:16:35.663 lat (usec): min=123, max=342, avg=216.19, stdev=54.62 00:16:35.663 clat percentiles (usec): 00:16:35.663 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 126], 00:16:35.663 | 30.00th=[ 143], 40.00th=[ 176], 50.00th=[ 219], 60.00th=[ 227], 00:16:35.663 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 273], 00:16:35.663 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 306], 00:16:35.663 | 99.99th=[ 318] 00:16:35.663 bw ( KiB/s): min= 9768, max= 9768, per=29.59%, avg=9768.00, stdev= 0.00, samples=1 00:16:35.663 iops : min= 2442, max= 2442, avg=2442.00, stdev= 0.00, samples=1 00:16:35.663 lat (usec) : 250=67.83%, 500=32.09%, 750=0.05% 00:16:35.663 lat (msec) : 4=0.02% 00:16:35.663 cpu : usr=1.80%, sys=5.90%, ctx=4167, majf=0, minf=9 00:16:35.663 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.663 issued rwts: total=2048,2118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.663 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.664 job1: (groupid=0, jobs=1): err= 0: pid=87296: Sun Dec 15 13:30:41 2024 00:16:35.664 read: IOPS=1809, 
BW=7237KiB/s (7410kB/s)(7244KiB/1001msec) 00:16:35.664 slat (nsec): min=10566, max=65465, avg=12667.73, stdev=4178.78 00:16:35.664 clat (usec): min=198, max=513, avg=258.73, stdev=28.76 00:16:35.664 lat (usec): min=213, max=526, avg=271.40, stdev=29.08 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 237], 00:16:35.664 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:16:35.664 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:16:35.664 | 99.00th=[ 330], 99.50th=[ 457], 99.90th=[ 494], 99.95th=[ 515], 00:16:35.664 | 99.99th=[ 515] 00:16:35.664 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:35.664 slat (usec): min=14, max=182, avg=21.76, stdev= 8.24 00:16:35.664 clat (usec): min=64, max=671, avg=223.63, stdev=42.72 00:16:35.664 lat (usec): min=121, max=707, avg=245.40, stdev=43.37 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:16:35.664 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 223], 00:16:35.664 | 70.00th=[ 235], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 297], 00:16:35.664 | 99.00th=[ 330], 99.50th=[ 379], 99.90th=[ 529], 99.95th=[ 570], 00:16:35.664 | 99.99th=[ 668] 00:16:35.664 bw ( KiB/s): min= 8192, max= 8192, per=24.81%, avg=8192.00, stdev= 0.00, samples=1 00:16:35.664 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:35.664 lat (usec) : 100=0.10%, 250=61.05%, 500=38.71%, 750=0.13% 00:16:35.664 cpu : usr=1.00%, sys=5.70%, ctx=3866, majf=0, minf=11 00:16:35.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 issued rwts: total=1811,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.664 job2: (groupid=0, jobs=1): err= 0: pid=87297: Sun Dec 15 13:30:41 2024 00:16:35.664 read: IOPS=1669, BW=6677KiB/s (6838kB/s)(6684KiB/1001msec) 00:16:35.664 slat (nsec): min=9082, max=61904, avg=16627.31, stdev=5083.96 00:16:35.664 clat (usec): min=164, max=400, avg=256.67, stdev=33.74 00:16:35.664 lat (usec): min=175, max=414, avg=273.30, stdev=33.51 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:16:35.664 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:16:35.664 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 330], 00:16:35.664 | 99.00th=[ 359], 99.50th=[ 367], 99.90th=[ 396], 99.95th=[ 400], 00:16:35.664 | 99.99th=[ 400] 00:16:35.664 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:35.664 slat (nsec): min=11248, max=80916, avg=22236.74, stdev=9432.44 00:16:35.664 clat (usec): min=115, max=7305, avg=240.01, stdev=186.80 00:16:35.664 lat (usec): min=132, max=7332, avg=262.25, stdev=188.02 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 155], 20.00th=[ 186], 00:16:35.664 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 251], 00:16:35.664 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:16:35.664 | 99.00th=[ 330], 99.50th=[ 494], 99.90th=[ 2147], 99.95th=[ 2147], 00:16:35.664 | 99.99th=[ 7308] 00:16:35.664 bw ( KiB/s): min= 8192, max= 8192, per=24.81%, avg=8192.00, stdev= 0.00, samples=1 
00:16:35.664 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:35.664 lat (usec) : 250=55.04%, 500=44.69%, 750=0.08% 00:16:35.664 lat (msec) : 2=0.05%, 4=0.11%, 10=0.03% 00:16:35.664 cpu : usr=1.80%, sys=5.60%, ctx=3719, majf=0, minf=15 00:16:35.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 issued rwts: total=1671,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.664 job3: (groupid=0, jobs=1): err= 0: pid=87298: Sun Dec 15 13:30:41 2024 00:16:35.664 read: IOPS=1810, BW=7241KiB/s (7415kB/s)(7248KiB/1001msec) 00:16:35.664 slat (nsec): min=9146, max=60940, avg=14349.25, stdev=4823.63 00:16:35.664 clat (usec): min=152, max=582, avg=256.96, stdev=28.33 00:16:35.664 lat (usec): min=173, max=600, avg=271.31, stdev=28.98 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:16:35.664 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:16:35.664 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:16:35.664 | 99.00th=[ 322], 99.50th=[ 437], 99.90th=[ 498], 99.95th=[ 586], 00:16:35.664 | 99.99th=[ 586] 00:16:35.664 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:35.664 slat (usec): min=15, max=160, avg=22.54, stdev= 7.80 00:16:35.664 clat (usec): min=117, max=726, avg=222.93, stdev=43.38 00:16:35.664 lat (usec): min=139, max=760, avg=245.47, stdev=45.64 00:16:35.664 clat percentiles (usec): 00:16:35.664 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:16:35.664 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:16:35.664 | 70.00th=[ 235], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 297], 00:16:35.664 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 603], 99.95th=[ 627], 00:16:35.664 | 99.99th=[ 725] 00:16:35.664 bw ( KiB/s): min= 8192, max= 8192, per=24.81%, avg=8192.00, stdev= 0.00, samples=1 00:16:35.664 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:35.664 lat (usec) : 250=62.56%, 500=37.25%, 750=0.18% 00:16:35.664 cpu : usr=1.30%, sys=5.70%, ctx=3865, majf=0, minf=9 00:16:35.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.664 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.664 00:16:35.664 Run status group 0 (all jobs): 00:16:35.664 READ: bw=28.7MiB/s (30.0MB/s), 6677KiB/s-8184KiB/s (6838kB/s-8380kB/s), io=28.7MiB (30.1MB), run=1001-1001msec 00:16:35.664 WRITE: bw=32.2MiB/s (33.8MB/s), 8184KiB/s-8464KiB/s (8380kB/s-8667kB/s), io=32.3MiB (33.8MB), run=1001-1001msec 00:16:35.664 00:16:35.664 Disk stats (read/write): 00:16:35.664 nvme0n1: ios=1709/2048, merge=0/0, ticks=428/409, in_queue=837, util=89.08% 00:16:35.664 nvme0n2: ios=1585/1823, merge=0/0, ticks=424/438, in_queue=862, util=89.20% 00:16:35.664 nvme0n3: ios=1564/1681, merge=0/0, ticks=443/391, in_queue=834, util=89.56% 00:16:35.664 nvme0n4: ios=1536/1821, merge=0/0, ticks=405/429, in_queue=834, util=89.80% 00:16:35.664 13:30:41 -- target/fio.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:35.664 [global] 00:16:35.664 thread=1 00:16:35.664 invalidate=1 00:16:35.664 rw=write 00:16:35.664 time_based=1 00:16:35.664 runtime=1 00:16:35.664 ioengine=libaio 00:16:35.664 direct=1 00:16:35.664 bs=4096 00:16:35.664 iodepth=128 00:16:35.664 norandommap=0 00:16:35.664 numjobs=1 00:16:35.664 00:16:35.664 verify_dump=1 00:16:35.664 verify_backlog=512 00:16:35.664 verify_state_save=0 00:16:35.664 do_verify=1 00:16:35.664 verify=crc32c-intel 00:16:35.664 [job0] 00:16:35.664 filename=/dev/nvme0n1 00:16:35.664 [job1] 00:16:35.664 filename=/dev/nvme0n2 00:16:35.664 [job2] 00:16:35.664 filename=/dev/nvme0n3 00:16:35.664 [job3] 00:16:35.664 filename=/dev/nvme0n4 00:16:35.664 Could not set queue depth (nvme0n1) 00:16:35.664 Could not set queue depth (nvme0n2) 00:16:35.664 Could not set queue depth (nvme0n3) 00:16:35.664 Could not set queue depth (nvme0n4) 00:16:35.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.664 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.664 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.664 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.664 fio-3.35 00:16:35.664 Starting 4 threads 00:16:37.042 00:16:37.042 job0: (groupid=0, jobs=1): err= 0: pid=87357: Sun Dec 15 13:30:42 2024 00:16:37.042 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:16:37.042 slat (usec): min=7, max=6305, avg=91.22, stdev=553.34 00:16:37.042 clat (usec): min=7273, max=18385, avg=11834.17, stdev=992.70 00:16:37.042 lat (usec): min=7296, max=21224, avg=11925.40, stdev=1108.06 00:16:37.042 clat percentiles (usec): 00:16:37.042 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11076], 20.00th=[11338], 00:16:37.042 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:16:37.042 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13435], 00:16:37.042 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17957], 99.95th=[17957], 00:16:37.042 | 99.99th=[18482] 00:16:37.042 write: IOPS=5508, BW=21.5MiB/s (22.6MB/s)(21.5MiB/1001msec); 0 zone resets 00:16:37.042 slat (usec): min=10, max=5016, avg=89.76, stdev=523.83 00:16:37.042 clat (usec): min=326, max=17712, avg=11976.03, stdev=1471.23 00:16:37.042 lat (usec): min=4391, max=17730, avg=12065.80, stdev=1474.13 00:16:37.042 clat percentiles (usec): 00:16:37.042 | 1.00th=[ 5932], 5.00th=[ 8225], 10.00th=[10945], 20.00th=[11600], 00:16:37.042 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:16:37.042 | 70.00th=[12518], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:16:37.042 | 99.00th=[15664], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:16:37.042 | 99.99th=[17695] 00:16:37.042 bw ( KiB/s): min=20561, max=22568, per=26.97%, avg=21564.50, stdev=1419.16, samples=2 00:16:37.042 iops : min= 5140, max= 5642, avg=5391.00, stdev=354.97, samples=2 00:16:37.042 lat (usec) : 500=0.01% 00:16:37.042 lat (msec) : 10=5.19%, 20=94.80% 00:16:37.042 cpu : usr=5.00%, sys=13.09%, ctx=321, majf=0, minf=3 00:16:37.042 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:37.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.042 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:16:37.042 issued rwts: total=5120,5514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.042 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.042 job1: (groupid=0, jobs=1): err= 0: pid=87358: Sun Dec 15 13:30:42 2024 00:16:37.042 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:16:37.042 slat (usec): min=9, max=3493, avg=90.73, stdev=428.45 00:16:37.042 clat (usec): min=8579, max=15653, avg=11921.20, stdev=1223.40 00:16:37.042 lat (usec): min=8680, max=15667, avg=12011.93, stdev=1205.18 00:16:37.042 clat percentiles (usec): 00:16:37.042 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11076], 00:16:37.042 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:16:37.042 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:16:37.042 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15008], 99.95th=[15270], 00:16:37.042 | 99.99th=[15664] 00:16:37.042 write: IOPS=5331, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1002msec); 0 zone resets 00:16:37.042 slat (usec): min=10, max=3784, avg=92.61, stdev=377.31 00:16:37.042 clat (usec): min=1815, max=15934, avg=12282.45, stdev=1603.09 00:16:37.042 lat (usec): min=1862, max=15962, avg=12375.06, stdev=1582.32 00:16:37.042 clat percentiles (usec): 00:16:37.042 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[11600], 00:16:37.042 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:16:37.042 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 00:16:37.042 | 99.00th=[15139], 99.50th=[15533], 99.90th=[15926], 99.95th=[15926], 00:16:37.042 | 99.99th=[15926] 00:16:37.043 bw ( KiB/s): min=20521, max=21240, per=26.11%, avg=20880.50, stdev=508.41, samples=2 00:16:37.043 iops : min= 5130, max= 5310, avg=5220.00, stdev=127.28, samples=2 00:16:37.043 lat (msec) : 2=0.07%, 4=0.23%, 10=11.31%, 20=88.40% 00:16:37.043 cpu : usr=4.90%, sys=14.19%, ctx=730, majf=0, minf=7 00:16:37.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.043 issued rwts: total=5120,5342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.043 job2: (groupid=0, jobs=1): err= 0: pid=87359: Sun Dec 15 13:30:42 2024 00:16:37.043 read: IOPS=4196, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:16:37.043 slat (usec): min=5, max=4833, avg=107.76, stdev=491.24 00:16:37.043 clat (usec): min=188, max=17601, avg=14162.84, stdev=1337.93 00:16:37.043 lat (usec): min=5021, max=18899, avg=14270.59, stdev=1261.67 00:16:37.043 clat percentiles (usec): 00:16:37.043 | 1.00th=[ 8717], 5.00th=[11863], 10.00th=[12518], 20.00th=[13960], 00:16:37.043 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:16:37.043 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15270], 95.00th=[15664], 00:16:37.043 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16909], 99.95th=[17695], 00:16:37.043 | 99.99th=[17695] 00:16:37.043 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:16:37.043 slat (usec): min=9, max=3748, avg=111.30, stdev=443.95 00:16:37.043 clat (usec): min=11045, max=17490, avg=14551.38, stdev=1250.60 00:16:37.043 lat (usec): min=11067, max=17552, avg=14662.68, stdev=1213.17 00:16:37.043 clat percentiles (usec): 00:16:37.043 | 1.00th=[11600], 5.00th=[12125], 10.00th=[12387], 20.00th=[13173], 
00:16:37.043 | 30.00th=[14353], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:16:37.043 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16188], 00:16:37.043 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:16:37.043 | 99.99th=[17433] 00:16:37.043 bw ( KiB/s): min=17728, max=19048, per=22.99%, avg=18388.00, stdev=933.38, samples=2 00:16:37.043 iops : min= 4432, max= 4762, avg=4597.00, stdev=233.35, samples=2 00:16:37.043 lat (usec) : 250=0.01% 00:16:37.043 lat (msec) : 10=0.73%, 20=99.26% 00:16:37.043 cpu : usr=3.79%, sys=13.26%, ctx=636, majf=0, minf=9 00:16:37.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.043 issued rwts: total=4213,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.043 job3: (groupid=0, jobs=1): err= 0: pid=87360: Sun Dec 15 13:30:42 2024 00:16:37.043 read: IOPS=4307, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1004msec) 00:16:37.043 slat (usec): min=9, max=3369, avg=105.46, stdev=478.83 00:16:37.043 clat (usec): min=3001, max=17984, avg=13972.42, stdev=1493.43 00:16:37.043 lat (usec): min=3139, max=18054, avg=14077.88, stdev=1431.46 00:16:37.043 clat percentiles (usec): 00:16:37.043 | 1.00th=[ 7046], 5.00th=[11600], 10.00th=[12256], 20.00th=[13566], 00:16:37.043 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:16:37.043 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15270], 00:16:37.043 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17433], 99.95th=[17957], 00:16:37.043 | 99.99th=[17957] 00:16:37.043 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:16:37.043 slat (usec): min=12, max=3554, avg=110.22, stdev=465.73 00:16:37.043 clat (usec): min=10836, max=17835, avg=14370.62, stdev=1301.28 00:16:37.043 lat (usec): min=10858, max=17875, avg=14480.84, stdev=1265.98 00:16:37.043 clat percentiles (usec): 00:16:37.043 | 1.00th=[11600], 5.00th=[11863], 10.00th=[12125], 20.00th=[12911], 00:16:37.043 | 30.00th=[14091], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:16:37.043 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15795], 95.00th=[16057], 00:16:37.043 | 99.00th=[17171], 99.50th=[17695], 99.90th=[17695], 99.95th=[17957], 00:16:37.043 | 99.99th=[17957] 00:16:37.043 bw ( KiB/s): min=17875, max=19024, per=23.07%, avg=18449.50, stdev=812.47, samples=2 00:16:37.043 iops : min= 4468, max= 4756, avg=4612.00, stdev=203.65, samples=2 00:16:37.043 lat (msec) : 4=0.35%, 10=0.38%, 20=99.27% 00:16:37.043 cpu : usr=4.39%, sys=12.96%, ctx=652, majf=0, minf=8 00:16:37.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:37.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.043 issued rwts: total=4325,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.043 00:16:37.043 Run status group 0 (all jobs): 00:16:37.043 READ: bw=73.1MiB/s (76.6MB/s), 16.4MiB/s-20.0MiB/s (17.2MB/s-20.9MB/s), io=73.4MiB (76.9MB), run=1001-1004msec 00:16:37.043 WRITE: bw=78.1MiB/s (81.9MB/s), 17.9MiB/s-21.5MiB/s (18.8MB/s-22.6MB/s), io=78.4MiB (82.2MB), run=1001-1004msec 00:16:37.043 00:16:37.043 Disk stats (read/write): 
00:16:37.043 nvme0n1: ios=4558/4608, merge=0/0, ticks=23925/23821, in_queue=47746, util=87.78% 00:16:37.043 nvme0n2: ios=4351/4608, merge=0/0, ticks=16220/16673, in_queue=32893, util=87.74% 00:16:37.043 nvme0n3: ios=3584/3991, merge=0/0, ticks=12113/12668, in_queue=24781, util=89.25% 00:16:37.043 nvme0n4: ios=3584/4064, merge=0/0, ticks=11796/12470, in_queue=24266, util=89.70% 00:16:37.043 13:30:42 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:37.043 [global] 00:16:37.043 thread=1 00:16:37.043 invalidate=1 00:16:37.043 rw=randwrite 00:16:37.043 time_based=1 00:16:37.043 runtime=1 00:16:37.043 ioengine=libaio 00:16:37.043 direct=1 00:16:37.043 bs=4096 00:16:37.043 iodepth=128 00:16:37.043 norandommap=0 00:16:37.043 numjobs=1 00:16:37.043 00:16:37.043 verify_dump=1 00:16:37.043 verify_backlog=512 00:16:37.043 verify_state_save=0 00:16:37.043 do_verify=1 00:16:37.043 verify=crc32c-intel 00:16:37.043 [job0] 00:16:37.043 filename=/dev/nvme0n1 00:16:37.043 [job1] 00:16:37.043 filename=/dev/nvme0n2 00:16:37.043 [job2] 00:16:37.043 filename=/dev/nvme0n3 00:16:37.043 [job3] 00:16:37.043 filename=/dev/nvme0n4 00:16:37.043 Could not set queue depth (nvme0n1) 00:16:37.043 Could not set queue depth (nvme0n2) 00:16:37.044 Could not set queue depth (nvme0n3) 00:16:37.044 Could not set queue depth (nvme0n4) 00:16:37.044 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.044 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.044 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.044 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:37.044 fio-3.35 00:16:37.044 Starting 4 threads 00:16:38.421 00:16:38.421 job0: (groupid=0, jobs=1): err= 0: pid=87414: Sun Dec 15 13:30:43 2024 00:16:38.421 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:16:38.421 slat (usec): min=3, max=10583, avg=179.23, stdev=927.88 00:16:38.421 clat (usec): min=14991, max=37386, avg=22309.60, stdev=3189.46 00:16:38.421 lat (usec): min=16074, max=46958, avg=22488.83, stdev=3294.16 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[16319], 5.00th=[17957], 10.00th=[19006], 20.00th=[19530], 00:16:38.421 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21890], 60.00th=[22152], 00:16:38.421 | 70.00th=[23462], 80.00th=[24249], 90.00th=[26608], 95.00th=[28181], 00:16:38.421 | 99.00th=[32637], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:16:38.421 | 99.99th=[37487] 00:16:38.421 write: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1008msec); 0 zone resets 00:16:38.421 slat (usec): min=5, max=10163, avg=174.06, stdev=746.70 00:16:38.421 clat (usec): min=7362, max=35052, avg=23076.53, stdev=3168.43 00:16:38.421 lat (usec): min=7864, max=35267, avg=23250.59, stdev=3230.95 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[14353], 5.00th=[16909], 10.00th=[18744], 20.00th=[21103], 00:16:38.421 | 30.00th=[21890], 40.00th=[22414], 50.00th=[23462], 60.00th=[24511], 00:16:38.421 | 70.00th=[25035], 80.00th=[25560], 90.00th=[26084], 95.00th=[26870], 00:16:38.421 | 99.00th=[30540], 99.50th=[31589], 99.90th=[32637], 99.95th=[32900], 00:16:38.421 | 99.99th=[34866] 00:16:38.421 bw ( KiB/s): min=10624, max=12288, per=16.74%, avg=11456.00, stdev=1176.63, samples=2 00:16:38.421 iops : min= 
2656, max= 3072, avg=2864.00, stdev=294.16, samples=2 00:16:38.421 lat (msec) : 10=0.20%, 20=16.45%, 50=83.35% 00:16:38.421 cpu : usr=3.18%, sys=6.95%, ctx=879, majf=0, minf=23 00:16:38.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:38.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.421 issued rwts: total=2560,2991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.421 job1: (groupid=0, jobs=1): err= 0: pid=87415: Sun Dec 15 13:30:43 2024 00:16:38.421 read: IOPS=5647, BW=22.1MiB/s (23.1MB/s)(22.3MiB/1012msec) 00:16:38.421 slat (usec): min=4, max=11517, avg=83.24, stdev=557.24 00:16:38.421 clat (usec): min=3907, max=22596, avg=11158.84, stdev=2638.25 00:16:38.421 lat (usec): min=3919, max=22618, avg=11242.09, stdev=2665.59 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 9110], 00:16:38.421 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[11076], 00:16:38.421 | 70.00th=[11731], 80.00th=[12649], 90.00th=[14091], 95.00th=[17171], 00:16:38.421 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[22676], 00:16:38.421 | 99.99th=[22676] 00:16:38.421 write: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec); 0 zone resets 00:16:38.421 slat (usec): min=3, max=11879, avg=79.42, stdev=539.74 00:16:38.421 clat (usec): min=3505, max=23794, avg=10487.82, stdev=2144.77 00:16:38.421 lat (usec): min=3526, max=26960, avg=10567.24, stdev=2215.08 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[ 4359], 5.00th=[ 6718], 10.00th=[ 8356], 20.00th=[ 9372], 00:16:38.421 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:16:38.421 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12387], 95.00th=[12911], 00:16:38.421 | 99.00th=[18220], 99.50th=[19006], 99.90th=[21103], 99.95th=[21365], 00:16:38.421 | 99.99th=[23725] 00:16:38.421 bw ( KiB/s): min=24216, max=24576, per=35.66%, avg=24396.00, stdev=254.56, samples=2 00:16:38.421 iops : min= 6054, max= 6144, avg=6099.00, stdev=63.64, samples=2 00:16:38.421 lat (msec) : 4=0.36%, 10=34.29%, 20=64.70%, 50=0.65% 00:16:38.421 cpu : usr=4.55%, sys=14.34%, ctx=561, majf=0, minf=11 00:16:38.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:38.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.421 issued rwts: total=5715,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.421 job2: (groupid=0, jobs=1): err= 0: pid=87416: Sun Dec 15 13:30:43 2024 00:16:38.421 read: IOPS=5047, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1013msec) 00:16:38.421 slat (usec): min=5, max=26161, avg=101.38, stdev=743.81 00:16:38.421 clat (usec): min=3735, max=53001, avg=13336.50, stdev=5107.80 00:16:38.421 lat (usec): min=4261, max=56726, avg=13437.88, stdev=5151.56 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10552], 00:16:38.421 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12649], 00:16:38.421 | 70.00th=[13698], 80.00th=[14877], 90.00th=[17171], 95.00th=[21103], 00:16:38.421 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:16:38.421 | 99.99th=[53216] 
00:16:38.421 write: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1013msec); 0 zone resets 00:16:38.421 slat (usec): min=5, max=9830, avg=87.97, stdev=588.11 00:16:38.421 clat (usec): min=3823, max=24018, avg=11736.10, stdev=2099.46 00:16:38.421 lat (usec): min=3845, max=24028, avg=11824.07, stdev=2179.79 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[ 4883], 5.00th=[ 7046], 10.00th=[ 9110], 20.00th=[10945], 00:16:38.421 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:16:38.421 | 70.00th=[12780], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:16:38.421 | 99.00th=[14484], 99.50th=[19006], 99.90th=[22938], 99.95th=[23987], 00:16:38.421 | 99.99th=[23987] 00:16:38.421 bw ( KiB/s): min=20072, max=20888, per=29.93%, avg=20480.00, stdev=577.00, samples=2 00:16:38.421 iops : min= 5018, max= 5222, avg=5120.00, stdev=144.25, samples=2 00:16:38.421 lat (msec) : 4=0.10%, 10=14.24%, 20=82.51%, 50=3.15%, 100=0.01% 00:16:38.421 cpu : usr=5.14%, sys=12.25%, ctx=515, majf=0, minf=9 00:16:38.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:38.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.421 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.421 job3: (groupid=0, jobs=1): err= 0: pid=87417: Sun Dec 15 13:30:43 2024 00:16:38.421 read: IOPS=2543, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:16:38.421 slat (usec): min=4, max=15169, avg=185.20, stdev=964.60 00:16:38.421 clat (usec): min=10194, max=36234, avg=22843.49, stdev=3738.35 00:16:38.421 lat (usec): min=10765, max=37498, avg=23028.69, stdev=3823.35 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[14877], 5.00th=[16909], 10.00th=[19792], 20.00th=[20841], 00:16:38.421 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:16:38.421 | 70.00th=[23462], 80.00th=[25560], 90.00th=[28181], 95.00th=[30540], 00:16:38.421 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:16:38.421 | 99.99th=[36439] 00:16:38.421 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:16:38.421 slat (usec): min=4, max=10727, avg=163.47, stdev=732.18 00:16:38.421 clat (usec): min=11502, max=34629, avg=22613.95, stdev=3800.98 00:16:38.421 lat (usec): min=11520, max=34664, avg=22777.41, stdev=3867.70 00:16:38.421 clat percentiles (usec): 00:16:38.421 | 1.00th=[12911], 5.00th=[15008], 10.00th=[17171], 20.00th=[20317], 00:16:38.421 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22938], 60.00th=[24249], 00:16:38.421 | 70.00th=[24773], 80.00th=[25560], 90.00th=[26346], 95.00th=[27657], 00:16:38.421 | 99.00th=[31851], 99.50th=[32113], 99.90th=[32637], 99.95th=[33817], 00:16:38.421 | 99.99th=[34866] 00:16:38.421 bw ( KiB/s): min=11360, max=12263, per=17.26%, avg=11811.50, stdev=638.52, samples=2 00:16:38.421 iops : min= 2840, max= 3065, avg=2952.50, stdev=159.10, samples=2 00:16:38.421 lat (msec) : 20=15.75%, 50=84.25% 00:16:38.421 cpu : usr=3.27%, sys=6.93%, ctx=969, majf=0, minf=5 00:16:38.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:38.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.421 issued rwts: total=2571,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.421 
latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.421 00:16:38.421 Run status group 0 (all jobs): 00:16:38.421 READ: bw=61.5MiB/s (64.5MB/s), 9.92MiB/s-22.1MiB/s (10.4MB/s-23.1MB/s), io=62.3MiB (65.4MB), run=1008-1013msec 00:16:38.421 WRITE: bw=66.8MiB/s (70.1MB/s), 11.6MiB/s-23.7MiB/s (12.2MB/s-24.9MB/s), io=67.7MiB (71.0MB), run=1008-1013msec 00:16:38.421 00:16:38.421 Disk stats (read/write): 00:16:38.421 nvme0n1: ios=2161/2560, merge=0/0, ticks=23104/27953, in_queue=51057, util=86.47% 00:16:38.421 nvme0n2: ios=5126/5127, merge=0/0, ticks=52156/49629, in_queue=101785, util=88.78% 00:16:38.421 nvme0n3: ios=4383/4608, merge=0/0, ticks=51442/51252, in_queue=102694, util=89.20% 00:16:38.421 nvme0n4: ios=2226/2560, merge=0/0, ticks=24848/26997, in_queue=51845, util=89.65% 00:16:38.421 13:30:43 -- target/fio.sh@55 -- # sync 00:16:38.421 13:30:43 -- target/fio.sh@59 -- # fio_pid=87436 00:16:38.421 13:30:43 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:38.421 13:30:43 -- target/fio.sh@61 -- # sleep 3 00:16:38.421 [global] 00:16:38.421 thread=1 00:16:38.421 invalidate=1 00:16:38.421 rw=read 00:16:38.421 time_based=1 00:16:38.422 runtime=10 00:16:38.422 ioengine=libaio 00:16:38.422 direct=1 00:16:38.422 bs=4096 00:16:38.422 iodepth=1 00:16:38.422 norandommap=1 00:16:38.422 numjobs=1 00:16:38.422 00:16:38.422 [job0] 00:16:38.422 filename=/dev/nvme0n1 00:16:38.422 [job1] 00:16:38.422 filename=/dev/nvme0n2 00:16:38.422 [job2] 00:16:38.422 filename=/dev/nvme0n3 00:16:38.422 [job3] 00:16:38.422 filename=/dev/nvme0n4 00:16:38.422 Could not set queue depth (nvme0n1) 00:16:38.422 Could not set queue depth (nvme0n2) 00:16:38.422 Could not set queue depth (nvme0n3) 00:16:38.422 Could not set queue depth (nvme0n4) 00:16:38.422 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.422 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.422 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.422 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:38.422 fio-3.35 00:16:38.422 Starting 4 threads 00:16:41.707 13:30:46 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:41.707 fio: pid=87479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:41.707 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=65040384, buflen=4096 00:16:41.708 13:30:47 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:41.708 fio: pid=87478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:41.708 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44326912, buflen=4096 00:16:41.708 13:30:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.708 13:30:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:41.966 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49438720, buflen=4096 00:16:41.966 fio: pid=87476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:42.224 13:30:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:16:42.224 13:30:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:42.483 fio: pid=87477, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:42.483 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15400960, buflen=4096 00:16:42.483 00:16:42.483 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87476: Sun Dec 15 13:30:47 2024 00:16:42.483 read: IOPS=3499, BW=13.7MiB/s (14.3MB/s)(47.1MiB/3449msec) 00:16:42.483 slat (usec): min=8, max=12464, avg=20.10, stdev=179.58 00:16:42.483 clat (usec): min=43, max=3930, avg=264.06, stdev=70.92 00:16:42.483 lat (usec): min=127, max=12692, avg=284.16, stdev=192.44 00:16:42.483 clat percentiles (usec): 00:16:42.483 | 1.00th=[ 137], 5.00th=[ 196], 10.00th=[ 217], 20.00th=[ 249], 00:16:42.483 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:16:42.483 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:16:42.483 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 644], 99.95th=[ 1647], 00:16:42.484 | 99.99th=[ 3261] 00:16:42.484 bw ( KiB/s): min=13622, max=14336, per=22.02%, avg=13773.00, stdev=278.76, samples=6 00:16:42.484 iops : min= 3405, max= 3584, avg=3443.17, stdev=69.74, samples=6 00:16:42.484 lat (usec) : 50=0.01%, 250=20.93%, 500=78.91%, 750=0.07%, 1000=0.01% 00:16:42.484 lat (msec) : 2=0.03%, 4=0.04% 00:16:42.484 cpu : usr=1.36%, sys=4.70%, ctx=12099, majf=0, minf=1 00:16:42.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 issued rwts: total=12071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.484 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87477: Sun Dec 15 13:30:47 2024 00:16:42.484 read: IOPS=5346, BW=20.9MiB/s (21.9MB/s)(78.7MiB/3768msec) 00:16:42.484 slat (usec): min=9, max=13869, avg=17.95, stdev=166.24 00:16:42.484 clat (nsec): min=1497, max=4176.5k, avg=167879.17, stdev=64714.40 00:16:42.484 lat (usec): min=125, max=14037, avg=185.83, stdev=178.35 00:16:42.484 clat percentiles (usec): 00:16:42.484 | 1.00th=[ 120], 5.00th=[ 127], 10.00th=[ 141], 20.00th=[ 149], 00:16:42.484 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:16:42.484 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 198], 95.00th=[ 247], 00:16:42.484 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 469], 99.95th=[ 619], 00:16:42.484 | 99.99th=[ 3064] 00:16:42.484 bw ( KiB/s): min=17963, max=22712, per=34.02%, avg=21278.86, stdev=2053.54, samples=7 00:16:42.484 iops : min= 4490, max= 5678, avg=5319.57, stdev=513.58, samples=7 00:16:42.484 lat (usec) : 2=0.01%, 4=0.01%, 250=95.65%, 500=4.26%, 750=0.03% 00:16:42.484 lat (usec) : 1000=0.01% 00:16:42.484 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01% 00:16:42.484 cpu : usr=1.41%, sys=6.74%, ctx=20172, majf=0, minf=1 00:16:42.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 issued rwts: total=20145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.484 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:16:42.484 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87478: Sun Dec 15 13:30:47 2024 00:16:42.484 read: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(42.3MiB/3173msec) 00:16:42.484 slat (usec): min=8, max=12469, avg=17.68, stdev=152.39 00:16:42.484 clat (usec): min=121, max=18555, avg=274.10, stdev=179.94 00:16:42.484 lat (usec): min=144, max=18570, avg=291.78, stdev=235.03 00:16:42.484 clat percentiles (usec): 00:16:42.484 | 1.00th=[ 157], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:16:42.484 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:16:42.484 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:16:42.484 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 474], 99.95th=[ 709], 00:16:42.484 | 99.99th=[ 2638] 00:16:42.484 bw ( KiB/s): min=13432, max=13776, per=21.81%, avg=13643.50, stdev=115.16, samples=6 00:16:42.484 iops : min= 3358, max= 3444, avg=3410.83, stdev=28.76, samples=6 00:16:42.484 lat (usec) : 250=8.51%, 500=91.39%, 750=0.06% 00:16:42.484 lat (msec) : 2=0.02%, 4=0.01%, 20=0.01% 00:16:42.484 cpu : usr=0.95%, sys=4.38%, ctx=10836, majf=0, minf=2 00:16:42.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 issued rwts: total=10823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.484 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87479: Sun Dec 15 13:30:47 2024 00:16:42.484 read: IOPS=5466, BW=21.4MiB/s (22.4MB/s)(62.0MiB/2905msec) 00:16:42.484 slat (nsec): min=11319, max=75857, avg=15527.44, stdev=3688.25 00:16:42.484 clat (usec): min=133, max=1808, avg=166.10, stdev=29.41 00:16:42.484 lat (usec): min=146, max=1822, avg=181.63, stdev=29.54 00:16:42.484 clat percentiles (usec): 00:16:42.484 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:16:42.484 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:16:42.484 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 198], 00:16:42.484 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 465], 99.95th=[ 529], 00:16:42.484 | 99.99th=[ 750] 00:16:42.484 bw ( KiB/s): min=21351, max=22648, per=35.63%, avg=22287.80, stdev=530.29, samples=5 00:16:42.484 iops : min= 5337, max= 5662, avg=5571.80, stdev=132.90, samples=5 00:16:42.484 lat (usec) : 250=97.71%, 500=2.22%, 750=0.06% 00:16:42.484 lat (msec) : 2=0.01% 00:16:42.484 cpu : usr=1.62%, sys=6.82%, ctx=15882, majf=0, minf=2 00:16:42.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.484 issued rwts: total=15880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.484 00:16:42.484 Run status group 0 (all jobs): 00:16:42.484 READ: bw=61.1MiB/s (64.0MB/s), 13.3MiB/s-21.4MiB/s (14.0MB/s-22.4MB/s), io=230MiB (241MB), run=2905-3768msec 00:16:42.484 00:16:42.484 Disk stats (read/write): 00:16:42.484 nvme0n1: ios=11721/0, merge=0/0, ticks=3145/0, in_queue=3145, util=95.39% 00:16:42.484 nvme0n2: ios=19134/0, merge=0/0, ticks=3313/0, in_queue=3313, 
util=95.48% 00:16:42.484 nvme0n3: ios=10637/0, merge=0/0, ticks=2934/0, in_queue=2934, util=96.24% 00:16:42.484 nvme0n4: ios=15754/0, merge=0/0, ticks=2673/0, in_queue=2673, util=96.83% 00:16:42.484 13:30:47 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.484 13:30:47 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:42.743 13:30:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.743 13:30:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:43.001 13:30:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.001 13:30:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:43.260 13:30:48 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.260 13:30:48 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:43.519 13:30:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:43.519 13:30:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:43.777 13:30:49 -- target/fio.sh@69 -- # fio_status=0 00:16:43.777 13:30:49 -- target/fio.sh@70 -- # wait 87436 00:16:43.777 13:30:49 -- target/fio.sh@70 -- # fio_status=4 00:16:43.777 13:30:49 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.777 13:30:49 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.777 13:30:49 -- common/autotest_common.sh@1208 -- # local i=0 00:16:43.777 13:30:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:43.777 13:30:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.777 13:30:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:43.777 13:30:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.777 nvmf hotplug test: fio failed as expected 00:16:43.777 13:30:49 -- common/autotest_common.sh@1220 -- # return 0 00:16:43.777 13:30:49 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:43.777 13:30:49 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:43.778 13:30:49 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.036 13:30:49 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:44.036 13:30:49 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:44.036 13:30:49 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:44.036 13:30:49 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:44.036 13:30:49 -- target/fio.sh@91 -- # nvmftestfini 00:16:44.036 13:30:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:44.036 13:30:49 -- nvmf/common.sh@116 -- # sync 00:16:44.036 13:30:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.036 13:30:49 -- nvmf/common.sh@119 -- # set +e 00:16:44.036 13:30:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.036 13:30:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.036 rmmod nvme_tcp 00:16:44.036 rmmod nvme_fabrics 00:16:44.036 rmmod nvme_keyring 00:16:44.036 13:30:49 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-fabrics 00:16:44.036 13:30:49 -- nvmf/common.sh@123 -- # set -e 00:16:44.036 13:30:49 -- nvmf/common.sh@124 -- # return 0 00:16:44.036 13:30:49 -- nvmf/common.sh@477 -- # '[' -n 86941 ']' 00:16:44.036 13:30:49 -- nvmf/common.sh@478 -- # killprocess 86941 00:16:44.036 13:30:49 -- common/autotest_common.sh@936 -- # '[' -z 86941 ']' 00:16:44.036 13:30:49 -- common/autotest_common.sh@940 -- # kill -0 86941 00:16:44.036 13:30:49 -- common/autotest_common.sh@941 -- # uname 00:16:44.036 13:30:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.036 13:30:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86941 00:16:44.036 killing process with pid 86941 00:16:44.036 13:30:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:44.036 13:30:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:44.036 13:30:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86941' 00:16:44.036 13:30:49 -- common/autotest_common.sh@955 -- # kill 86941 00:16:44.036 13:30:49 -- common/autotest_common.sh@960 -- # wait 86941 00:16:44.295 13:30:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:44.295 13:30:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:44.295 13:30:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:44.295 13:30:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.295 13:30:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:44.295 13:30:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.295 13:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.295 13:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.295 13:30:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:44.295 00:16:44.295 real 0m19.505s 00:16:44.295 user 1m14.033s 00:16:44.295 sys 0m9.394s 00:16:44.295 13:30:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:44.295 13:30:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.295 ************************************ 00:16:44.295 END TEST nvmf_fio_target 00:16:44.295 ************************************ 00:16:44.295 13:30:49 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:44.295 13:30:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.295 13:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.295 13:30:49 -- common/autotest_common.sh@10 -- # set +x 00:16:44.295 ************************************ 00:16:44.295 START TEST nvmf_bdevio 00:16:44.295 ************************************ 00:16:44.295 13:30:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:44.554 * Looking for test storage... 
00:16:44.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.554 13:30:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:44.554 13:30:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:44.554 13:30:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:44.554 13:30:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:44.554 13:30:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:44.554 13:30:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:44.554 13:30:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:44.554 13:30:50 -- scripts/common.sh@335 -- # IFS=.-: 00:16:44.554 13:30:50 -- scripts/common.sh@335 -- # read -ra ver1 00:16:44.554 13:30:50 -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.554 13:30:50 -- scripts/common.sh@336 -- # read -ra ver2 00:16:44.554 13:30:50 -- scripts/common.sh@337 -- # local 'op=<' 00:16:44.554 13:30:50 -- scripts/common.sh@339 -- # ver1_l=2 00:16:44.554 13:30:50 -- scripts/common.sh@340 -- # ver2_l=1 00:16:44.554 13:30:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:44.554 13:30:50 -- scripts/common.sh@343 -- # case "$op" in 00:16:44.554 13:30:50 -- scripts/common.sh@344 -- # : 1 00:16:44.555 13:30:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:44.555 13:30:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:44.555 13:30:50 -- scripts/common.sh@364 -- # decimal 1 00:16:44.555 13:30:50 -- scripts/common.sh@352 -- # local d=1 00:16:44.555 13:30:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.555 13:30:50 -- scripts/common.sh@354 -- # echo 1 00:16:44.555 13:30:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:44.555 13:30:50 -- scripts/common.sh@365 -- # decimal 2 00:16:44.555 13:30:50 -- scripts/common.sh@352 -- # local d=2 00:16:44.555 13:30:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.555 13:30:50 -- scripts/common.sh@354 -- # echo 2 00:16:44.555 13:30:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:44.555 13:30:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:44.555 13:30:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:44.555 13:30:50 -- scripts/common.sh@367 -- # return 0 00:16:44.555 13:30:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.555 13:30:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.555 --rc genhtml_branch_coverage=1 00:16:44.555 --rc genhtml_function_coverage=1 00:16:44.555 --rc genhtml_legend=1 00:16:44.555 --rc geninfo_all_blocks=1 00:16:44.555 --rc geninfo_unexecuted_blocks=1 00:16:44.555 00:16:44.555 ' 00:16:44.555 13:30:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.555 --rc genhtml_branch_coverage=1 00:16:44.555 --rc genhtml_function_coverage=1 00:16:44.555 --rc genhtml_legend=1 00:16:44.555 --rc geninfo_all_blocks=1 00:16:44.555 --rc geninfo_unexecuted_blocks=1 00:16:44.555 00:16:44.555 ' 00:16:44.555 13:30:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.555 --rc genhtml_branch_coverage=1 00:16:44.555 --rc genhtml_function_coverage=1 00:16:44.555 --rc genhtml_legend=1 00:16:44.555 --rc geninfo_all_blocks=1 00:16:44.555 --rc geninfo_unexecuted_blocks=1 00:16:44.555 00:16:44.555 ' 00:16:44.555 
13:30:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:44.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.555 --rc genhtml_branch_coverage=1 00:16:44.555 --rc genhtml_function_coverage=1 00:16:44.555 --rc genhtml_legend=1 00:16:44.555 --rc geninfo_all_blocks=1 00:16:44.555 --rc geninfo_unexecuted_blocks=1 00:16:44.555 00:16:44.555 ' 00:16:44.555 13:30:50 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.555 13:30:50 -- nvmf/common.sh@7 -- # uname -s 00:16:44.555 13:30:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.555 13:30:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.555 13:30:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.555 13:30:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.555 13:30:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.555 13:30:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.555 13:30:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.555 13:30:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.555 13:30:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.555 13:30:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:44.555 13:30:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:44.555 13:30:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.555 13:30:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.555 13:30:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.555 13:30:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.555 13:30:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.555 13:30:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.555 13:30:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.555 13:30:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 13:30:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 13:30:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 13:30:50 -- paths/export.sh@5 -- # export PATH 00:16:44.555 13:30:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.555 13:30:50 -- nvmf/common.sh@46 -- # : 0 00:16:44.555 13:30:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:44.555 13:30:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:44.555 13:30:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:44.555 13:30:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.555 13:30:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.555 13:30:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:44.555 13:30:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:44.555 13:30:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:44.555 13:30:50 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.555 13:30:50 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.555 13:30:50 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:44.555 13:30:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:44.555 13:30:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.555 13:30:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:44.555 13:30:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:44.555 13:30:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:44.555 13:30:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.555 13:30:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.555 13:30:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.555 13:30:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:44.555 13:30:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:44.555 13:30:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.555 13:30:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.555 13:30:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.555 13:30:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:44.555 13:30:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.555 13:30:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.555 13:30:50 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.555 13:30:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.555 13:30:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.555 13:30:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.555 13:30:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.555 13:30:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.555 13:30:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:44.555 13:30:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:44.555 Cannot find device "nvmf_tgt_br" 00:16:44.555 13:30:50 -- nvmf/common.sh@154 -- # true 00:16:44.555 13:30:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.555 Cannot find device "nvmf_tgt_br2" 00:16:44.555 13:30:50 -- nvmf/common.sh@155 -- # true 00:16:44.555 13:30:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:44.555 13:30:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:44.555 Cannot find device "nvmf_tgt_br" 00:16:44.555 13:30:50 -- nvmf/common.sh@157 -- # true 00:16:44.555 13:30:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:44.555 Cannot find device "nvmf_tgt_br2" 00:16:44.555 13:30:50 -- nvmf/common.sh@158 -- # true 00:16:44.555 13:30:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:44.814 13:30:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:44.814 13:30:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.814 13:30:50 -- nvmf/common.sh@161 -- # true 00:16:44.814 13:30:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.814 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.814 13:30:50 -- nvmf/common.sh@162 -- # true 00:16:44.814 13:30:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.814 13:30:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.814 13:30:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.814 13:30:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.814 13:30:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.814 13:30:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.814 13:30:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.814 13:30:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.814 13:30:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.814 13:30:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:44.814 13:30:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:44.814 13:30:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:44.814 13:30:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:44.814 13:30:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.814 13:30:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.814 13:30:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:44.814 13:30:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:44.814 13:30:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:44.814 13:30:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.814 13:30:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.814 13:30:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.814 13:30:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.814 13:30:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.814 13:30:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:44.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:44.815 00:16:44.815 --- 10.0.0.2 ping statistics --- 00:16:44.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.815 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:44.815 13:30:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:44.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:44.815 00:16:44.815 --- 10.0.0.3 ping statistics --- 00:16:44.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.815 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:44.815 13:30:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:16:44.815 00:16:44.815 --- 10.0.0.1 ping statistics --- 00:16:44.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.815 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:16:44.815 13:30:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.815 13:30:50 -- nvmf/common.sh@421 -- # return 0 00:16:44.815 13:30:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.815 13:30:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.815 13:30:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.815 13:30:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.815 13:30:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.815 13:30:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.815 13:30:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.073 13:30:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:45.074 13:30:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.074 13:30:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.074 13:30:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 13:30:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:45.074 13:30:50 -- nvmf/common.sh@469 -- # nvmfpid=87809 00:16:45.074 13:30:50 -- nvmf/common.sh@470 -- # waitforlisten 87809 00:16:45.074 13:30:50 -- common/autotest_common.sh@829 -- # '[' -z 87809 ']' 00:16:45.074 13:30:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.074 13:30:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.074 13:30:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
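The nvmf_veth_init steps traced above build a small veth/bridge topology before the target starts: an initiator-side interface on the host (10.0.0.1), two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all tied together through the nvmf_br bridge, with an iptables rule admitting NVMe/TCP traffic on port 4420 and ping checks confirming reachability. A condensed stand-alone sketch of the same plumbing follows; it reuses the interface, namespace and address names from this run, must be run as root, and is only an illustration, not a substitute for nvmf/common.sh:

#!/usr/bin/env bash
# Sketch of the veth topology that nvmf_veth_init sets up in this log.
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: host-side *_br ends, initiator/namespace *_if ends
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up, including loopback inside the namespace
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the three host-side ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# admit NVMe/TCP on 4420, allow intra-bridge forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1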
00:16:45.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.074 13:30:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.074 13:30:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 [2024-12-15 13:30:50.561983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:45.074 [2024-12-15 13:30:50.562059] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.074 [2024-12-15 13:30:50.692190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.074 [2024-12-15 13:30:50.750055] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.074 [2024-12-15 13:30:50.750202] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.074 [2024-12-15 13:30:50.750213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.074 [2024-12-15 13:30:50.750221] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.074 [2024-12-15 13:30:50.751178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:45.074 [2024-12-15 13:30:50.751335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:45.074 [2024-12-15 13:30:50.751456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:45.074 [2024-12-15 13:30:50.751460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.009 13:30:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.009 13:30:51 -- common/autotest_common.sh@862 -- # return 0 00:16:46.009 13:30:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:46.009 13:30:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.009 13:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.010 13:30:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.010 13:30:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.010 13:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.010 13:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.010 [2024-12-15 13:30:51.632695] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.010 13:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.010 13:30:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.010 13:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.010 13:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.010 Malloc0 00:16:46.010 13:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.010 13:30:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.010 13:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.010 13:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.010 13:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.010 13:30:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.010 13:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.010 13:30:51 -- common/autotest_common.sh@10 -- # set +x 
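At this point the target process (pid 87809) is up and bdevio.sh provisions it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and, in the trace that follows, a TCP listener on 10.0.0.2:4420. Since rpc_cmd forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock, the same provisioning can be sketched as plain rpc.py calls (path as in this workspace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport with an 8 KiB IO unit size, as in the trace above
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
$RPC bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host (-a) with the fixed test serial
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listener on the namespaced target interface
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then consumes the bdev_nvme_attach_controller JSON shown further down via --json /dev/fd/62, so the remote namespace appears to it as bdev Nvme1n1.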
00:16:46.268 13:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.268 13:30:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.268 13:30:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.268 13:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.268 [2024-12-15 13:30:51.705313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.268 13:30:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.268 13:30:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:46.268 13:30:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:46.268 13:30:51 -- nvmf/common.sh@520 -- # config=() 00:16:46.268 13:30:51 -- nvmf/common.sh@520 -- # local subsystem config 00:16:46.268 13:30:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:46.268 13:30:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:46.268 { 00:16:46.268 "params": { 00:16:46.268 "name": "Nvme$subsystem", 00:16:46.268 "trtype": "$TEST_TRANSPORT", 00:16:46.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:46.268 "adrfam": "ipv4", 00:16:46.269 "trsvcid": "$NVMF_PORT", 00:16:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:46.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:46.269 "hdgst": ${hdgst:-false}, 00:16:46.269 "ddgst": ${ddgst:-false} 00:16:46.269 }, 00:16:46.269 "method": "bdev_nvme_attach_controller" 00:16:46.269 } 00:16:46.269 EOF 00:16:46.269 )") 00:16:46.269 13:30:51 -- nvmf/common.sh@542 -- # cat 00:16:46.269 13:30:51 -- nvmf/common.sh@544 -- # jq . 00:16:46.269 13:30:51 -- nvmf/common.sh@545 -- # IFS=, 00:16:46.269 13:30:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:46.269 "params": { 00:16:46.269 "name": "Nvme1", 00:16:46.269 "trtype": "tcp", 00:16:46.269 "traddr": "10.0.0.2", 00:16:46.269 "adrfam": "ipv4", 00:16:46.269 "trsvcid": "4420", 00:16:46.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.269 "hdgst": false, 00:16:46.269 "ddgst": false 00:16:46.269 }, 00:16:46.269 "method": "bdev_nvme_attach_controller" 00:16:46.269 }' 00:16:46.269 [2024-12-15 13:30:51.758036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.269 [2024-12-15 13:30:51.758117] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87863 ] 00:16:46.269 [2024-12-15 13:30:51.895874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.527 [2024-12-15 13:30:51.963954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.527 [2024-12-15 13:30:51.964111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.527 [2024-12-15 13:30:51.964112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.527 [2024-12-15 13:30:52.133814] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:46.527 [2024-12-15 13:30:52.133858] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:46.527 I/O targets: 00:16:46.527 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.527 00:16:46.527 00:16:46.527 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.527 http://cunit.sourceforge.net/ 00:16:46.527 00:16:46.527 00:16:46.527 Suite: bdevio tests on: Nvme1n1 00:16:46.527 Test: blockdev write read block ...passed 00:16:46.786 Test: blockdev write zeroes read block ...passed 00:16:46.786 Test: blockdev write zeroes read no split ...passed 00:16:46.786 Test: blockdev write zeroes read split ...passed 00:16:46.786 Test: blockdev write zeroes read split partial ...passed 00:16:46.786 Test: blockdev reset ...[2024-12-15 13:30:52.248398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.786 [2024-12-15 13:30:52.248505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfded0 (9): Bad file descriptor 00:16:46.786 [2024-12-15 13:30:52.261562] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:46.786 passed 00:16:46.786 Test: blockdev write read 8 blocks ...passed 00:16:46.786 Test: blockdev write read size > 128k ...passed 00:16:46.786 Test: blockdev write read invalid size ...passed 00:16:46.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.786 Test: blockdev write read max offset ...passed 00:16:46.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.786 Test: blockdev writev readv 8 blocks ...passed 00:16:46.786 Test: blockdev writev readv 30 x 1block ...passed 00:16:46.786 Test: blockdev writev readv block ...passed 00:16:46.786 Test: blockdev writev readv size > 128k ...passed 00:16:46.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:46.786 Test: blockdev comparev and writev ...[2024-12-15 13:30:52.434699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.434892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.435000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.435083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.435486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.436532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.436797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.436898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.437263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.437352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.437427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.437509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.437909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.438010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.787 [2024-12-15 13:30:52.438089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.787 [2024-12-15 13:30:52.438155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:47.046 passed 00:16:47.046 Test: blockdev nvme passthru rw ...passed 00:16:47.046 Test: blockdev nvme passthru vendor specific ...[2024-12-15 13:30:52.519881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.046 [2024-12-15 13:30:52.520012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:47.046 [2024-12-15 13:30:52.520201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.046 [2024-12-15 13:30:52.520301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:47.046 [2024-12-15 13:30:52.520484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.046 [2024-12-15 13:30:52.520582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:47.046 [2024-12-15 13:30:52.520791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:47.046 [2024-12-15 13:30:52.520877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:47.046 passed 00:16:47.046 Test: blockdev nvme admin passthru ...passed 00:16:47.046 Test: blockdev copy ...passed 00:16:47.046 00:16:47.046 Run Summary: Type Total Ran Passed Failed Inactive 00:16:47.046 suites 1 1 n/a 0 0 00:16:47.046 tests 23 23 23 0 0 00:16:47.046 asserts 152 152 152 0 n/a 00:16:47.046 00:16:47.046 Elapsed time = 0.892 seconds 00:16:47.305 13:30:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.305 13:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.305 13:30:52 -- common/autotest_common.sh@10 -- # set +x 00:16:47.305 13:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.305 13:30:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:47.305 13:30:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:47.305 13:30:52 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:47.305 13:30:52 -- nvmf/common.sh@116 -- # sync 00:16:47.305 13:30:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:47.305 13:30:52 -- nvmf/common.sh@119 -- # set +e 00:16:47.305 13:30:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:47.305 13:30:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:47.305 rmmod nvme_tcp 00:16:47.305 rmmod nvme_fabrics 00:16:47.305 rmmod nvme_keyring 00:16:47.305 13:30:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:47.305 13:30:52 -- nvmf/common.sh@123 -- # set -e 00:16:47.305 13:30:52 -- nvmf/common.sh@124 -- # return 0 00:16:47.305 13:30:52 -- nvmf/common.sh@477 -- # '[' -n 87809 ']' 00:16:47.305 13:30:52 -- nvmf/common.sh@478 -- # killprocess 87809 00:16:47.305 13:30:52 -- common/autotest_common.sh@936 -- # '[' -z 87809 ']' 00:16:47.305 13:30:52 -- common/autotest_common.sh@940 -- # kill -0 87809 00:16:47.305 13:30:52 -- common/autotest_common.sh@941 -- # uname 00:16:47.305 13:30:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.305 13:30:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87809 00:16:47.305 13:30:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:47.305 13:30:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:47.305 13:30:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87809' 00:16:47.305 killing process with pid 87809 00:16:47.305 13:30:52 -- common/autotest_common.sh@955 -- # kill 87809 00:16:47.305 13:30:52 -- common/autotest_common.sh@960 -- # wait 87809 00:16:47.563 13:30:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:47.563 13:30:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:47.563 13:30:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:47.563 13:30:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.563 13:30:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:47.563 13:30:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.563 13:30:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.563 13:30:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.563 13:30:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:47.563 00:16:47.563 real 0m3.219s 00:16:47.563 user 0m11.574s 00:16:47.563 sys 0m0.787s 00:16:47.563 13:30:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:47.563 ************************************ 00:16:47.563 13:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:47.563 END TEST nvmf_bdevio 00:16:47.563 ************************************ 00:16:47.563 13:30:53 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:47.563 13:30:53 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:47.563 13:30:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:47.563 13:30:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:47.563 13:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:47.563 ************************************ 00:16:47.563 START TEST nvmf_bdevio_no_huge 00:16:47.563 ************************************ 00:16:47.563 13:30:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:47.823 * Looking for test storage... 
00:16:47.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:47.823 13:30:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:47.823 13:30:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:47.823 13:30:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:47.823 13:30:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:47.823 13:30:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:47.823 13:30:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:47.823 13:30:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:47.823 13:30:53 -- scripts/common.sh@335 -- # IFS=.-: 00:16:47.823 13:30:53 -- scripts/common.sh@335 -- # read -ra ver1 00:16:47.823 13:30:53 -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.823 13:30:53 -- scripts/common.sh@336 -- # read -ra ver2 00:16:47.823 13:30:53 -- scripts/common.sh@337 -- # local 'op=<' 00:16:47.823 13:30:53 -- scripts/common.sh@339 -- # ver1_l=2 00:16:47.823 13:30:53 -- scripts/common.sh@340 -- # ver2_l=1 00:16:47.823 13:30:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:47.823 13:30:53 -- scripts/common.sh@343 -- # case "$op" in 00:16:47.823 13:30:53 -- scripts/common.sh@344 -- # : 1 00:16:47.823 13:30:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:47.823 13:30:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.823 13:30:53 -- scripts/common.sh@364 -- # decimal 1 00:16:47.823 13:30:53 -- scripts/common.sh@352 -- # local d=1 00:16:47.823 13:30:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.823 13:30:53 -- scripts/common.sh@354 -- # echo 1 00:16:47.823 13:30:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:47.823 13:30:53 -- scripts/common.sh@365 -- # decimal 2 00:16:47.823 13:30:53 -- scripts/common.sh@352 -- # local d=2 00:16:47.823 13:30:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.823 13:30:53 -- scripts/common.sh@354 -- # echo 2 00:16:47.823 13:30:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:47.823 13:30:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:47.823 13:30:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:47.823 13:30:53 -- scripts/common.sh@367 -- # return 0 00:16:47.823 13:30:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.823 13:30:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.823 --rc genhtml_branch_coverage=1 00:16:47.823 --rc genhtml_function_coverage=1 00:16:47.823 --rc genhtml_legend=1 00:16:47.823 --rc geninfo_all_blocks=1 00:16:47.823 --rc geninfo_unexecuted_blocks=1 00:16:47.823 00:16:47.823 ' 00:16:47.823 13:30:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.823 --rc genhtml_branch_coverage=1 00:16:47.823 --rc genhtml_function_coverage=1 00:16:47.823 --rc genhtml_legend=1 00:16:47.823 --rc geninfo_all_blocks=1 00:16:47.823 --rc geninfo_unexecuted_blocks=1 00:16:47.823 00:16:47.823 ' 00:16:47.823 13:30:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.823 --rc genhtml_branch_coverage=1 00:16:47.823 --rc genhtml_function_coverage=1 00:16:47.823 --rc genhtml_legend=1 00:16:47.823 --rc geninfo_all_blocks=1 00:16:47.823 --rc geninfo_unexecuted_blocks=1 00:16:47.823 00:16:47.823 ' 00:16:47.823 
13:30:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.823 --rc genhtml_branch_coverage=1 00:16:47.823 --rc genhtml_function_coverage=1 00:16:47.823 --rc genhtml_legend=1 00:16:47.823 --rc geninfo_all_blocks=1 00:16:47.823 --rc geninfo_unexecuted_blocks=1 00:16:47.823 00:16:47.823 ' 00:16:47.823 13:30:53 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.823 13:30:53 -- nvmf/common.sh@7 -- # uname -s 00:16:47.823 13:30:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.823 13:30:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.823 13:30:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.823 13:30:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.823 13:30:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.823 13:30:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.823 13:30:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.823 13:30:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.823 13:30:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.823 13:30:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.823 13:30:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:47.823 13:30:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:47.823 13:30:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.823 13:30:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.823 13:30:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.823 13:30:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.823 13:30:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.823 13:30:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.823 13:30:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.823 13:30:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.823 13:30:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.823 13:30:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.823 13:30:53 -- paths/export.sh@5 -- # export PATH 00:16:47.823 13:30:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.823 13:30:53 -- nvmf/common.sh@46 -- # : 0 00:16:47.823 13:30:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:47.823 13:30:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:47.823 13:30:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:47.823 13:30:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.823 13:30:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.823 13:30:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:47.823 13:30:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:47.823 13:30:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:47.823 13:30:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.823 13:30:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.823 13:30:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:47.823 13:30:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:47.823 13:30:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.824 13:30:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:47.824 13:30:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:47.824 13:30:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:47.824 13:30:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.824 13:30:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.824 13:30:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.824 13:30:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:47.824 13:30:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:47.824 13:30:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:47.824 13:30:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:47.824 13:30:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:47.824 13:30:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:47.824 13:30:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.824 13:30:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.824 13:30:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:47.824 13:30:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:47.824 13:30:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.824 13:30:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.824 13:30:53 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.824 13:30:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.824 13:30:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.824 13:30:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.824 13:30:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.824 13:30:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.824 13:30:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:47.824 13:30:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:47.824 Cannot find device "nvmf_tgt_br" 00:16:47.824 13:30:53 -- nvmf/common.sh@154 -- # true 00:16:47.824 13:30:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.824 Cannot find device "nvmf_tgt_br2" 00:16:47.824 13:30:53 -- nvmf/common.sh@155 -- # true 00:16:47.824 13:30:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:47.824 13:30:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:47.824 Cannot find device "nvmf_tgt_br" 00:16:47.824 13:30:53 -- nvmf/common.sh@157 -- # true 00:16:47.824 13:30:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:47.824 Cannot find device "nvmf_tgt_br2" 00:16:47.824 13:30:53 -- nvmf/common.sh@158 -- # true 00:16:47.824 13:30:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:48.082 13:30:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:48.082 13:30:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.082 13:30:53 -- nvmf/common.sh@161 -- # true 00:16:48.082 13:30:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:48.082 13:30:53 -- nvmf/common.sh@162 -- # true 00:16:48.082 13:30:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:48.082 13:30:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:48.082 13:30:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:48.082 13:30:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:48.082 13:30:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:48.082 13:30:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:48.082 13:30:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:48.082 13:30:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:48.082 13:30:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:48.082 13:30:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:48.082 13:30:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:48.082 13:30:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:48.082 13:30:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:48.082 13:30:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:48.083 13:30:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:48.083 13:30:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:48.083 13:30:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:48.083 13:30:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:48.083 13:30:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:48.083 13:30:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:48.083 13:30:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.083 13:30:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.083 13:30:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.083 13:30:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:48.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:48.083 00:16:48.083 --- 10.0.0.2 ping statistics --- 00:16:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.083 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:48.083 13:30:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:48.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:48.083 00:16:48.083 --- 10.0.0.3 ping statistics --- 00:16:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.083 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:48.083 13:30:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:48.083 00:16:48.083 --- 10.0.0.1 ping statistics --- 00:16:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.083 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:48.083 13:30:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.083 13:30:53 -- nvmf/common.sh@421 -- # return 0 00:16:48.083 13:30:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:48.083 13:30:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.083 13:30:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:48.083 13:30:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:48.083 13:30:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.083 13:30:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:48.083 13:30:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:48.083 13:30:53 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:48.083 13:30:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:48.083 13:30:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:48.083 13:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.083 13:30:53 -- nvmf/common.sh@469 -- # nvmfpid=88049 00:16:48.083 13:30:53 -- nvmf/common.sh@470 -- # waitforlisten 88049 00:16:48.083 13:30:53 -- common/autotest_common.sh@829 -- # '[' -z 88049 ']' 00:16:48.083 13:30:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.083 13:30:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.083 13:30:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:48.083 13:30:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.083 13:30:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.083 13:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:48.342 [2024-12-15 13:30:53.797554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.342 [2024-12-15 13:30:53.797658] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:48.342 [2024-12-15 13:30:53.946646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.600 [2024-12-15 13:30:54.035024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:48.600 [2024-12-15 13:30:54.035165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.600 [2024-12-15 13:30:54.035177] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.600 [2024-12-15 13:30:54.035184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.600 [2024-12-15 13:30:54.035324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:48.600 [2024-12-15 13:30:54.036073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:48.600 [2024-12-15 13:30:54.036220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:48.600 [2024-12-15 13:30:54.036414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.167 13:30:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.167 13:30:54 -- common/autotest_common.sh@862 -- # return 0 00:16:49.167 13:30:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:49.167 13:30:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.167 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 13:30:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.167 13:30:54 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:49.167 13:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.167 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 [2024-12-15 13:30:54.802544] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.167 13:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.167 13:30:54 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:49.167 13:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.167 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 Malloc0 00:16:49.167 13:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.167 13:30:54 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:49.167 13:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.167 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.167 13:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.167 13:30:54 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.167 13:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.167 13:30:54 -- common/autotest_common.sh@10 -- # set +x 
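The intended difference from the preceding nvmf_bdevio run is memory backing only: here both the target and bdevio run without hugepages and are capped at 1024 MiB, which shows up in the EAL parameters above as --no-huge --iova-mode=va -m 1024 instead of the --iova-mode=pa hugepage setup used earlier. Condensed from the command lines in this trace (the 0x78 mask places the target's reactors on cores 3-6 and bdevio's -c 0x7 places its reactors on cores 0-2, matching the "Reactor started on core N" lines):

# target, inside the test namespace, with 1 GiB of plain (non-hugepage) memory
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
# bdevio on the initiator side, same no-hugepage setup, JSON config piped in on fd 62
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024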
00:16:49.167 13:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.168 13:30:54 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.168 13:30:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.168 13:30:54 -- common/autotest_common.sh@10 -- # set +x 00:16:49.168 [2024-12-15 13:30:54.840857] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.168 13:30:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.168 13:30:54 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:49.168 13:30:54 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:49.168 13:30:54 -- nvmf/common.sh@520 -- # config=() 00:16:49.168 13:30:54 -- nvmf/common.sh@520 -- # local subsystem config 00:16:49.168 13:30:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:49.168 13:30:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:49.168 { 00:16:49.168 "params": { 00:16:49.168 "name": "Nvme$subsystem", 00:16:49.168 "trtype": "$TEST_TRANSPORT", 00:16:49.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:49.168 "adrfam": "ipv4", 00:16:49.168 "trsvcid": "$NVMF_PORT", 00:16:49.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:49.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:49.168 "hdgst": ${hdgst:-false}, 00:16:49.168 "ddgst": ${ddgst:-false} 00:16:49.168 }, 00:16:49.168 "method": "bdev_nvme_attach_controller" 00:16:49.168 } 00:16:49.168 EOF 00:16:49.168 )") 00:16:49.168 13:30:54 -- nvmf/common.sh@542 -- # cat 00:16:49.168 13:30:54 -- nvmf/common.sh@544 -- # jq . 00:16:49.426 13:30:54 -- nvmf/common.sh@545 -- # IFS=, 00:16:49.426 13:30:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:49.426 "params": { 00:16:49.426 "name": "Nvme1", 00:16:49.426 "trtype": "tcp", 00:16:49.426 "traddr": "10.0.0.2", 00:16:49.426 "adrfam": "ipv4", 00:16:49.426 "trsvcid": "4420", 00:16:49.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:49.426 "hdgst": false, 00:16:49.426 "ddgst": false 00:16:49.426 }, 00:16:49.426 "method": "bdev_nvme_attach_controller" 00:16:49.426 }' 00:16:49.426 [2024-12-15 13:30:54.899692] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:49.426 [2024-12-15 13:30:54.899772] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88103 ] 00:16:49.426 [2024-12-15 13:30:55.039274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.684 [2024-12-15 13:30:55.171029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.684 [2024-12-15 13:30:55.171171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.684 [2024-12-15 13:30:55.171487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.684 [2024-12-15 13:30:55.364017] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:49.684 [2024-12-15 13:30:55.364217] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:49.684 I/O targets: 00:16:49.684 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:49.684 00:16:49.684 00:16:49.684 CUnit - A unit testing framework for C - Version 2.1-3 00:16:49.684 http://cunit.sourceforge.net/ 00:16:49.684 00:16:49.684 00:16:49.684 Suite: bdevio tests on: Nvme1n1 00:16:49.942 Test: blockdev write read block ...passed 00:16:49.942 Test: blockdev write zeroes read block ...passed 00:16:49.942 Test: blockdev write zeroes read no split ...passed 00:16:49.942 Test: blockdev write zeroes read split ...passed 00:16:49.942 Test: blockdev write zeroes read split partial ...passed 00:16:49.942 Test: blockdev reset ...[2024-12-15 13:30:55.488951] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:49.942 [2024-12-15 13:30:55.489168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc88820 (9): Bad file descriptor 00:16:49.942 [2024-12-15 13:30:55.503319] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:49.942 passed 00:16:49.942 Test: blockdev write read 8 blocks ...passed 00:16:49.942 Test: blockdev write read size > 128k ...passed 00:16:49.942 Test: blockdev write read invalid size ...passed 00:16:49.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:49.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:49.942 Test: blockdev write read max offset ...passed 00:16:50.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:50.201 Test: blockdev writev readv 8 blocks ...passed 00:16:50.201 Test: blockdev writev readv 30 x 1block ...passed 00:16:50.201 Test: blockdev writev readv block ...passed 00:16:50.201 Test: blockdev writev readv size > 128k ...passed 00:16:50.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:50.201 Test: blockdev comparev and writev ...[2024-12-15 13:30:55.678059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.678130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.678165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.678176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.678542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.678569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.678598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.678610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.679090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.679121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.679138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.679148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.679509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.679539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.679557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:50.201 [2024-12-15 13:30:55.679566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:50.201 passed 00:16:50.201 Test: blockdev nvme passthru rw ...passed 00:16:50.201 Test: blockdev nvme passthru vendor specific ...[2024-12-15 13:30:55.763927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:50.201 [2024-12-15 13:30:55.763957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.764079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:50.201 [2024-12-15 13:30:55.764097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.764220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:50.201 [2024-12-15 13:30:55.764237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:50.201 [2024-12-15 13:30:55.764352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:50.201 [2024-12-15 13:30:55.764369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:50.201 passed 00:16:50.201 Test: blockdev nvme admin passthru ...passed 00:16:50.201 Test: blockdev copy ...passed 00:16:50.201 00:16:50.201 Run Summary: Type Total Ran Passed Failed Inactive 00:16:50.201 suites 1 1 n/a 0 0 00:16:50.201 tests 23 23 23 0 0 00:16:50.201 asserts 152 152 152 0 n/a 00:16:50.201 00:16:50.201 Elapsed time = 0.917 seconds 00:16:50.769 13:30:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.769 13:30:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.769 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:16:50.769 13:30:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.769 13:30:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:50.769 13:30:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:50.769 13:30:56 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:50.769 13:30:56 -- nvmf/common.sh@116 -- # sync 00:16:50.769 13:30:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:50.769 13:30:56 -- nvmf/common.sh@119 -- # set +e 00:16:50.769 13:30:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:50.769 13:30:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:50.769 rmmod nvme_tcp 00:16:50.769 rmmod nvme_fabrics 00:16:50.769 rmmod nvme_keyring 00:16:50.769 13:30:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:50.769 13:30:56 -- nvmf/common.sh@123 -- # set -e 00:16:50.769 13:30:56 -- nvmf/common.sh@124 -- # return 0 00:16:50.769 13:30:56 -- nvmf/common.sh@477 -- # '[' -n 88049 ']' 00:16:50.769 13:30:56 -- nvmf/common.sh@478 -- # killprocess 88049 00:16:50.769 13:30:56 -- common/autotest_common.sh@936 -- # '[' -z 88049 ']' 00:16:50.769 13:30:56 -- common/autotest_common.sh@940 -- # kill -0 88049 00:16:50.769 13:30:56 -- common/autotest_common.sh@941 -- # uname 00:16:50.769 13:30:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:50.769 13:30:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88049 00:16:50.769 13:30:56 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:50.769 13:30:56 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:50.769 killing process with pid 88049 00:16:50.769 13:30:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88049' 00:16:50.769 13:30:56 -- common/autotest_common.sh@955 -- # kill 88049 00:16:50.769 13:30:56 -- common/autotest_common.sh@960 -- # wait 88049 00:16:51.028 13:30:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:51.028 13:30:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:51.028 13:30:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:51.028 13:30:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.028 13:30:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:51.028 13:30:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.028 13:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.028 13:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.287 13:30:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:51.287 00:16:51.287 real 0m3.491s 00:16:51.287 user 0m12.570s 00:16:51.287 sys 0m1.282s 00:16:51.287 13:30:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:51.287 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 ************************************ 00:16:51.287 END TEST nvmf_bdevio_no_huge 00:16:51.287 ************************************ 00:16:51.287 13:30:56 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:51.287 13:30:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:51.287 13:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:51.287 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:16:51.287 ************************************ 00:16:51.287 START TEST nvmf_tls 00:16:51.287 ************************************ 00:16:51.287 13:30:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:51.287 * Looking for test storage... 
00:16:51.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.287 13:30:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:51.287 13:30:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:51.287 13:30:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:51.287 13:30:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:51.287 13:30:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:51.287 13:30:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:51.287 13:30:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:51.287 13:30:56 -- scripts/common.sh@335 -- # IFS=.-: 00:16:51.287 13:30:56 -- scripts/common.sh@335 -- # read -ra ver1 00:16:51.287 13:30:56 -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.287 13:30:56 -- scripts/common.sh@336 -- # read -ra ver2 00:16:51.287 13:30:56 -- scripts/common.sh@337 -- # local 'op=<' 00:16:51.287 13:30:56 -- scripts/common.sh@339 -- # ver1_l=2 00:16:51.287 13:30:56 -- scripts/common.sh@340 -- # ver2_l=1 00:16:51.287 13:30:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:51.287 13:30:56 -- scripts/common.sh@343 -- # case "$op" in 00:16:51.287 13:30:56 -- scripts/common.sh@344 -- # : 1 00:16:51.287 13:30:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:51.287 13:30:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.287 13:30:56 -- scripts/common.sh@364 -- # decimal 1 00:16:51.287 13:30:56 -- scripts/common.sh@352 -- # local d=1 00:16:51.287 13:30:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.287 13:30:56 -- scripts/common.sh@354 -- # echo 1 00:16:51.287 13:30:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:51.287 13:30:56 -- scripts/common.sh@365 -- # decimal 2 00:16:51.287 13:30:56 -- scripts/common.sh@352 -- # local d=2 00:16:51.287 13:30:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.287 13:30:56 -- scripts/common.sh@354 -- # echo 2 00:16:51.287 13:30:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:51.287 13:30:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:51.287 13:30:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:51.287 13:30:56 -- scripts/common.sh@367 -- # return 0 00:16:51.287 13:30:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.287 13:30:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:51.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.287 --rc genhtml_branch_coverage=1 00:16:51.287 --rc genhtml_function_coverage=1 00:16:51.287 --rc genhtml_legend=1 00:16:51.287 --rc geninfo_all_blocks=1 00:16:51.287 --rc geninfo_unexecuted_blocks=1 00:16:51.287 00:16:51.287 ' 00:16:51.287 13:30:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:51.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.287 --rc genhtml_branch_coverage=1 00:16:51.287 --rc genhtml_function_coverage=1 00:16:51.287 --rc genhtml_legend=1 00:16:51.287 --rc geninfo_all_blocks=1 00:16:51.287 --rc geninfo_unexecuted_blocks=1 00:16:51.287 00:16:51.287 ' 00:16:51.287 13:30:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:51.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.287 --rc genhtml_branch_coverage=1 00:16:51.287 --rc genhtml_function_coverage=1 00:16:51.287 --rc genhtml_legend=1 00:16:51.287 --rc geninfo_all_blocks=1 00:16:51.287 --rc geninfo_unexecuted_blocks=1 00:16:51.287 00:16:51.288 ' 00:16:51.288 
13:30:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:51.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.288 --rc genhtml_branch_coverage=1 00:16:51.288 --rc genhtml_function_coverage=1 00:16:51.288 --rc genhtml_legend=1 00:16:51.288 --rc geninfo_all_blocks=1 00:16:51.288 --rc geninfo_unexecuted_blocks=1 00:16:51.288 00:16:51.288 ' 00:16:51.288 13:30:56 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.288 13:30:56 -- nvmf/common.sh@7 -- # uname -s 00:16:51.288 13:30:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.288 13:30:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.288 13:30:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.288 13:30:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.288 13:30:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.288 13:30:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.288 13:30:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.288 13:30:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.288 13:30:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.288 13:30:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.288 13:30:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:51.288 13:30:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:16:51.288 13:30:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.288 13:30:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.288 13:30:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.288 13:30:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.288 13:30:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.288 13:30:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.288 13:30:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.288 13:30:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.288 13:30:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.288 13:30:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.288 13:30:56 -- paths/export.sh@5 -- # export PATH 00:16:51.288 13:30:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.288 13:30:56 -- nvmf/common.sh@46 -- # : 0 00:16:51.288 13:30:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:51.288 13:30:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:51.288 13:30:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:51.288 13:30:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.288 13:30:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.288 13:30:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:51.288 13:30:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:51.288 13:30:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:51.288 13:30:56 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.288 13:30:56 -- target/tls.sh@71 -- # nvmftestinit 00:16:51.288 13:30:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:51.288 13:30:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.288 13:30:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:51.288 13:30:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:51.288 13:30:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:51.288 13:30:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.288 13:30:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.288 13:30:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.547 13:30:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:51.547 13:30:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:51.547 13:30:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:51.547 13:30:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:51.547 13:30:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:51.547 13:30:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:51.547 13:30:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.547 13:30:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.547 13:30:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:51.547 13:30:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:51.547 13:30:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.547 13:30:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.547 13:30:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.547 
13:30:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.547 13:30:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.547 13:30:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.547 13:30:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.547 13:30:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.547 13:30:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:51.547 13:30:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:51.547 Cannot find device "nvmf_tgt_br" 00:16:51.547 13:30:57 -- nvmf/common.sh@154 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.547 Cannot find device "nvmf_tgt_br2" 00:16:51.547 13:30:57 -- nvmf/common.sh@155 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:51.547 13:30:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:51.547 Cannot find device "nvmf_tgt_br" 00:16:51.547 13:30:57 -- nvmf/common.sh@157 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:51.547 Cannot find device "nvmf_tgt_br2" 00:16:51.547 13:30:57 -- nvmf/common.sh@158 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:51.547 13:30:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:51.547 13:30:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.547 13:30:57 -- nvmf/common.sh@161 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.547 13:30:57 -- nvmf/common.sh@162 -- # true 00:16:51.547 13:30:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.547 13:30:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.547 13:30:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.547 13:30:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.547 13:30:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.547 13:30:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.547 13:30:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.547 13:30:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:51.547 13:30:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:51.547 13:30:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:51.547 13:30:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:51.547 13:30:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:51.547 13:30:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:51.547 13:30:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.548 13:30:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.548 13:30:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.548 13:30:57 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:51.548 13:30:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:51.548 13:30:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.548 13:30:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.806 13:30:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.806 13:30:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.806 13:30:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.806 13:30:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:51.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:51.806 00:16:51.806 --- 10.0.0.2 ping statistics --- 00:16:51.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.806 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:51.806 13:30:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:51.806 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.806 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:51.806 00:16:51.806 --- 10.0.0.3 ping statistics --- 00:16:51.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.806 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:51.806 13:30:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:51.806 00:16:51.806 --- 10.0.0.1 ping statistics --- 00:16:51.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.806 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:51.806 13:30:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.807 13:30:57 -- nvmf/common.sh@421 -- # return 0 00:16:51.807 13:30:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.807 13:30:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.807 13:30:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:51.807 13:30:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:51.807 13:30:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.807 13:30:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:51.807 13:30:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:51.807 13:30:57 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:51.807 13:30:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.807 13:30:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.807 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:16:51.807 13:30:57 -- nvmf/common.sh@469 -- # nvmfpid=88302 00:16:51.807 13:30:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:51.807 13:30:57 -- nvmf/common.sh@470 -- # waitforlisten 88302 00:16:51.807 13:30:57 -- common/autotest_common.sh@829 -- # '[' -z 88302 ']' 00:16:51.807 13:30:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.807 13:30:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
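Note: the nvmf_veth_init sequence above recreates, for tls.sh, the same topology the bdevio run used: the initiator side stays in the root namespace on 10.0.0.1, the target's two interfaces (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, and the veth peers are joined through the nvmf_br bridge. Condensed to a single target leg, the sketch is:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator leg stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target leg, moved into the namespace below
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                            # bridge both legs together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in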
00:16:51.807 13:30:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.807 13:30:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.807 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:16:51.807 [2024-12-15 13:30:57.362669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:51.807 [2024-12-15 13:30:57.363373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.066 [2024-12-15 13:30:57.509802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.066 [2024-12-15 13:30:57.576730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:52.066 [2024-12-15 13:30:57.576894] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.066 [2024-12-15 13:30:57.576920] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.066 [2024-12-15 13:30:57.576941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.066 [2024-12-15 13:30:57.576999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.633 13:30:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:52.633 13:30:58 -- common/autotest_common.sh@862 -- # return 0 00:16:52.633 13:30:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:52.633 13:30:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:52.633 13:30:58 -- common/autotest_common.sh@10 -- # set +x 00:16:52.892 13:30:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.892 13:30:58 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:52.892 13:30:58 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:52.892 true 00:16:52.892 13:30:58 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:52.892 13:30:58 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.149 13:30:58 -- target/tls.sh@82 -- # version=0 00:16:53.149 13:30:58 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:53.149 13:30:58 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:53.715 13:30:59 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.715 13:30:59 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:53.715 13:30:59 -- target/tls.sh@90 -- # version=13 00:16:53.715 13:30:59 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:53.715 13:30:59 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:53.973 13:30:59 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.973 13:30:59 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:54.231 13:30:59 -- target/tls.sh@98 -- # version=7 00:16:54.231 13:30:59 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:54.231 13:30:59 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:54.231 13:30:59 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.489 13:31:00 -- 
target/tls.sh@105 -- # ktls=false 00:16:54.489 13:31:00 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:54.489 13:31:00 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:54.753 13:31:00 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.753 13:31:00 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:55.011 13:31:00 -- target/tls.sh@113 -- # ktls=true 00:16:55.011 13:31:00 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:55.011 13:31:00 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:55.270 13:31:00 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:55.270 13:31:00 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:55.529 13:31:01 -- target/tls.sh@121 -- # ktls=false 00:16:55.529 13:31:01 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:55.529 13:31:01 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:55.529 13:31:01 -- target/tls.sh@49 -- # local key hash crc 00:16:55.529 13:31:01 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:55.529 13:31:01 -- target/tls.sh@51 -- # hash=01 00:16:55.529 13:31:01 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:55.529 13:31:01 -- target/tls.sh@52 -- # gzip -1 -c 00:16:55.529 13:31:01 -- target/tls.sh@52 -- # tail -c8 00:16:55.529 13:31:01 -- target/tls.sh@52 -- # head -c 4 00:16:55.529 13:31:01 -- target/tls.sh@52 -- # crc='p$H�' 00:16:55.529 13:31:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:55.529 13:31:01 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:55.529 13:31:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.529 13:31:01 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.529 13:31:01 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:55.529 13:31:01 -- target/tls.sh@49 -- # local key hash crc 00:16:55.529 13:31:01 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:55.529 13:31:01 -- target/tls.sh@51 -- # hash=01 00:16:55.530 13:31:01 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:55.530 13:31:01 -- target/tls.sh@52 -- # gzip -1 -c 00:16:55.530 13:31:01 -- target/tls.sh@52 -- # tail -c8 00:16:55.530 13:31:01 -- target/tls.sh@52 -- # head -c 4 00:16:55.530 13:31:01 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:55.530 13:31:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:55.530 13:31:01 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:55.530 13:31:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.530 13:31:01 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:55.530 13:31:01 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.530 13:31:01 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:55.530 13:31:01 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:55.530 13:31:01 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
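Note: format_interchange_psk above converts a raw hex PSK into the NVMe TLS interchange form NVMeTLSkey-1:<hash>:<base64>:, where the base64 payload is the ASCII key followed by its CRC32, taken little-endian from the gzip trailer. A minimal sketch of the same transformation, writing the key file the later tests consume (local file name is illustrative):

key=00112233445566778899aabbccddeeff                            # configured PSK, hash field 01 as above
crc() { echo -n "$1" | gzip -1 -c | tail -c8 | head -c 4; }     # CRC32 extracted from the gzip trailer
psk="NVMeTLSkey-1:01:$({ echo -n "$key"; crc "$key"; } | base64):"
echo -n "$psk" > key1.txt && chmod 0600 key1.txt                # key files must not be group/world readable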
00:16:55.530 13:31:01 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.530 13:31:01 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:55.530 13:31:01 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:55.789 13:31:01 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:56.047 13:31:01 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:56.047 13:31:01 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:56.047 13:31:01 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:56.306 [2024-12-15 13:31:01.846828] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.306 13:31:01 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:56.565 13:31:02 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:56.824 [2024-12-15 13:31:02.270924] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:56.824 [2024-12-15 13:31:02.271178] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.824 13:31:02 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:56.824 malloc0 00:16:57.085 13:31:02 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:57.357 13:31:02 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:57.357 13:31:03 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.582 Initializing NVMe Controllers 00:17:09.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:09.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:09.582 Initialization complete. Launching workers. 
00:17:09.582 ======================================================== 00:17:09.582 Latency(us) 00:17:09.582 Device Information : IOPS MiB/s Average min max 00:17:09.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11220.56 43.83 5704.96 926.77 7430.38 00:17:09.582 ======================================================== 00:17:09.582 Total : 11220.56 43.83 5704.96 926.77 7430.38 00:17:09.582 00:17:09.582 13:31:13 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.582 13:31:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.582 13:31:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.582 13:31:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:09.582 13:31:13 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:09.582 13:31:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.582 13:31:13 -- target/tls.sh@28 -- # bdevperf_pid=88662 00:17:09.582 13:31:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.582 13:31:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.582 13:31:13 -- target/tls.sh@31 -- # waitforlisten 88662 /var/tmp/bdevperf.sock 00:17:09.582 13:31:13 -- common/autotest_common.sh@829 -- # '[' -z 88662 ']' 00:17:09.582 13:31:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.582 13:31:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.582 13:31:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.582 13:31:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.582 13:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:09.582 [2024-12-15 13:31:13.239183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.582 [2024-12-15 13:31:13.239300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88662 ] 00:17:09.582 [2024-12-15 13:31:13.381569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.582 [2024-12-15 13:31:13.451436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.582 13:31:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.582 13:31:14 -- common/autotest_common.sh@862 -- # return 0 00:17:09.582 13:31:14 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:09.582 [2024-12-15 13:31:14.407256] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.582 TLSTESTn1 00:17:09.582 13:31:14 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.582 Running I/O for 10 seconds... 
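Note: run_bdevperf, used here and in the failure cases below, wraps a recurring initiator-side pattern: start bdevperf idle with -z, attach a TLS-enabled controller over its private RPC socket, then launch the workload through bdevperf.py. Stripped of the test plumbing it is roughly the following (sketch; the test additionally waits for the RPC socket before issuing the attach):

spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk $spdk/test/nvmf/target/key1.txt                       # key registered for host1 on the target
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests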
00:17:19.563 00:17:19.563 Latency(us) 00:17:19.563 [2024-12-15T13:31:25.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.563 [2024-12-15T13:31:25.253Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.563 Verification LBA range: start 0x0 length 0x2000 00:17:19.563 TLSTESTn1 : 10.01 6304.36 24.63 0.00 0.00 20279.32 2100.13 259284.25 00:17:19.563 [2024-12-15T13:31:25.253Z] =================================================================================================================== 00:17:19.563 [2024-12-15T13:31:25.253Z] Total : 6304.36 24.63 0.00 0.00 20279.32 2100.13 259284.25 00:17:19.563 0 00:17:19.563 13:31:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.563 13:31:24 -- target/tls.sh@45 -- # killprocess 88662 00:17:19.563 13:31:24 -- common/autotest_common.sh@936 -- # '[' -z 88662 ']' 00:17:19.563 13:31:24 -- common/autotest_common.sh@940 -- # kill -0 88662 00:17:19.563 13:31:24 -- common/autotest_common.sh@941 -- # uname 00:17:19.563 13:31:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.563 13:31:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88662 00:17:19.563 killing process with pid 88662 00:17:19.563 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.563 00:17:19.563 Latency(us) 00:17:19.563 [2024-12-15T13:31:25.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.563 [2024-12-15T13:31:25.253Z] =================================================================================================================== 00:17:19.563 [2024-12-15T13:31:25.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.563 13:31:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.563 13:31:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.563 13:31:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88662' 00:17:19.563 13:31:24 -- common/autotest_common.sh@955 -- # kill 88662 00:17:19.563 13:31:24 -- common/autotest_common.sh@960 -- # wait 88662 00:17:19.563 13:31:24 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:19.563 13:31:24 -- common/autotest_common.sh@650 -- # local es=0 00:17:19.563 13:31:24 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:19.563 13:31:24 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:19.563 13:31:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.563 13:31:24 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:19.563 13:31:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:19.563 13:31:24 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:19.563 13:31:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.563 13:31:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.563 13:31:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.563 13:31:24 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:19.563 13:31:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.563 
13:31:24 -- target/tls.sh@28 -- # bdevperf_pid=88821 00:17:19.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.563 13:31:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.563 13:31:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.563 13:31:24 -- target/tls.sh@31 -- # waitforlisten 88821 /var/tmp/bdevperf.sock 00:17:19.563 13:31:24 -- common/autotest_common.sh@829 -- # '[' -z 88821 ']' 00:17:19.563 13:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.563 13:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.563 13:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.563 13:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.563 13:31:24 -- common/autotest_common.sh@10 -- # set +x 00:17:19.563 [2024-12-15 13:31:24.905353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:19.563 [2024-12-15 13:31:24.905455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88821 ] 00:17:19.563 [2024-12-15 13:31:25.043920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.563 [2024-12-15 13:31:25.096635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.499 13:31:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.499 13:31:25 -- common/autotest_common.sh@862 -- # return 0 00:17:20.499 13:31:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:20.499 [2024-12-15 13:31:26.147078] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.499 [2024-12-15 13:31:26.151948] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:20.499 [2024-12-15 13:31:26.152555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021cc0 (107): Transport endpoint is not connected 00:17:20.499 [2024-12-15 13:31:26.153544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1021cc0 (9): Bad file descriptor 00:17:20.499 [2024-12-15 13:31:26.154539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:20.499 [2024-12-15 13:31:26.154571] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:20.499 [2024-12-15 13:31:26.154580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
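Note: this failure is the expected outcome of the negative test: the target registered nqn.2016-06.io.spdk:host1 with key1.txt (nvmf_subsystem_add_host --psk earlier), so offering key2.txt for the same host identity leaves the TLS handshake unable to complete and the attach is rejected. Reduced to the single failing call (sketch, same bdevperf RPC socket as above):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt   # wrong key for host1 -> connection drops, RPC reports -32602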
00:17:20.499 2024/12/15 13:31:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:20.499 request: 00:17:20.499 { 00:17:20.499 "method": "bdev_nvme_attach_controller", 00:17:20.499 "params": { 00:17:20.499 "name": "TLSTEST", 00:17:20.499 "trtype": "tcp", 00:17:20.499 "traddr": "10.0.0.2", 00:17:20.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:20.499 "adrfam": "ipv4", 00:17:20.499 "trsvcid": "4420", 00:17:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.499 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:20.499 } 00:17:20.499 } 00:17:20.499 Got JSON-RPC error response 00:17:20.499 GoRPCClient: error on JSON-RPC call 00:17:20.499 13:31:26 -- target/tls.sh@36 -- # killprocess 88821 00:17:20.499 13:31:26 -- common/autotest_common.sh@936 -- # '[' -z 88821 ']' 00:17:20.499 13:31:26 -- common/autotest_common.sh@940 -- # kill -0 88821 00:17:20.499 13:31:26 -- common/autotest_common.sh@941 -- # uname 00:17:20.499 13:31:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.499 13:31:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88821 00:17:20.759 killing process with pid 88821 00:17:20.759 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.759 00:17:20.759 Latency(us) 00:17:20.759 [2024-12-15T13:31:26.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.759 [2024-12-15T13:31:26.449Z] =================================================================================================================== 00:17:20.759 [2024-12-15T13:31:26.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:20.759 13:31:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:20.759 13:31:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:20.759 13:31:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88821' 00:17:20.759 13:31:26 -- common/autotest_common.sh@955 -- # kill 88821 00:17:20.759 13:31:26 -- common/autotest_common.sh@960 -- # wait 88821 00:17:20.759 13:31:26 -- target/tls.sh@37 -- # return 1 00:17:20.759 13:31:26 -- common/autotest_common.sh@653 -- # es=1 00:17:20.759 13:31:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.759 13:31:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.759 13:31:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.759 13:31:26 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.759 13:31:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.759 13:31:26 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.759 13:31:26 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.759 13:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.759 13:31:26 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.759 13:31:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.759 13:31:26 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.759 13:31:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.759 13:31:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.759 13:31:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:20.759 13:31:26 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:20.759 13:31:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.760 13:31:26 -- target/tls.sh@28 -- # bdevperf_pid=88862 00:17:20.760 13:31:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.760 13:31:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.760 13:31:26 -- target/tls.sh@31 -- # waitforlisten 88862 /var/tmp/bdevperf.sock 00:17:20.760 13:31:26 -- common/autotest_common.sh@829 -- # '[' -z 88862 ']' 00:17:20.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.760 13:31:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.760 13:31:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.760 13:31:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.760 13:31:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.760 13:31:26 -- common/autotest_common.sh@10 -- # set +x 00:17:20.760 [2024-12-15 13:31:26.435424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.760 [2024-12-15 13:31:26.435531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88862 ] 00:17:21.019 [2024-12-15 13:31:26.569471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.019 [2024-12-15 13:31:26.633085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.956 13:31:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.956 13:31:27 -- common/autotest_common.sh@862 -- # return 0 00:17:21.956 13:31:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.956 [2024-12-15 13:31:27.611243] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.956 [2024-12-15 13:31:27.621228] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.956 [2024-12-15 13:31:27.621275] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.956 [2024-12-15 13:31:27.621322] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.956 [2024-12-15 13:31:27.621759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1c3ecc0 (107): Transport endpoint is not connected 00:17:21.956 [2024-12-15 13:31:27.622752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3ecc0 (9): Bad file descriptor 00:17:21.956 [2024-12-15 13:31:27.623749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.956 [2024-12-15 13:31:27.623772] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.956 [2024-12-15 13:31:27.623782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:21.957 2024/12/15 13:31:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:21.957 request: 00:17:21.957 { 00:17:21.957 "method": "bdev_nvme_attach_controller", 00:17:21.957 "params": { 00:17:21.957 "name": "TLSTEST", 00:17:21.957 "trtype": "tcp", 00:17:21.957 "traddr": "10.0.0.2", 00:17:21.957 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:21.957 "adrfam": "ipv4", 00:17:21.957 "trsvcid": "4420", 00:17:21.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.957 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:21.957 } 00:17:21.957 } 00:17:21.957 Got JSON-RPC error response 00:17:21.957 GoRPCClient: error on JSON-RPC call 00:17:22.216 13:31:27 -- target/tls.sh@36 -- # killprocess 88862 00:17:22.216 13:31:27 -- common/autotest_common.sh@936 -- # '[' -z 88862 ']' 00:17:22.216 13:31:27 -- common/autotest_common.sh@940 -- # kill -0 88862 00:17:22.216 13:31:27 -- common/autotest_common.sh@941 -- # uname 00:17:22.216 13:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.216 13:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88862 00:17:22.216 killing process with pid 88862 00:17:22.216 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.216 00:17:22.216 Latency(us) 00:17:22.216 [2024-12-15T13:31:27.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.216 [2024-12-15T13:31:27.906Z] =================================================================================================================== 00:17:22.216 [2024-12-15T13:31:27.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.216 13:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:22.216 13:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.216 13:31:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88862' 00:17:22.216 13:31:27 -- common/autotest_common.sh@955 -- # kill 88862 00:17:22.216 13:31:27 -- common/autotest_common.sh@960 -- # wait 88862 00:17:22.216 13:31:27 -- target/tls.sh@37 -- # return 1 00:17:22.216 13:31:27 -- common/autotest_common.sh@653 -- # es=1 00:17:22.216 13:31:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.216 13:31:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.216 13:31:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.216 13:31:27 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:22.216 13:31:27 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:22.216 13:31:27 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:22.216 13:31:27 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:22.216 13:31:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.216 13:31:27 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:22.216 13:31:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.216 13:31:27 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:22.216 13:31:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:22.216 13:31:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:22.216 13:31:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:22.216 13:31:27 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:22.216 13:31:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.216 13:31:27 -- target/tls.sh@28 -- # bdevperf_pid=88908 00:17:22.216 13:31:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.216 13:31:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.216 13:31:27 -- target/tls.sh@31 -- # waitforlisten 88908 /var/tmp/bdevperf.sock 00:17:22.216 13:31:27 -- common/autotest_common.sh@829 -- # '[' -z 88908 ']' 00:17:22.216 13:31:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.216 13:31:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.216 13:31:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.216 13:31:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.216 13:31:27 -- common/autotest_common.sh@10 -- # set +x 00:17:22.475 [2024-12-15 13:31:27.919009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:22.475 [2024-12-15 13:31:27.919099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88908 ] 00:17:22.475 [2024-12-15 13:31:28.052026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.475 [2024-12-15 13:31:28.105713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.413 13:31:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.413 13:31:29 -- common/autotest_common.sh@862 -- # return 0 00:17:23.413 13:31:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:23.672 [2024-12-15 13:31:29.232996] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:23.672 [2024-12-15 13:31:29.241346] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.672 [2024-12-15 13:31:29.241394] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:23.672 [2024-12-15 13:31:29.241439] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:23.672 [2024-12-15 13:31:29.242421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5dcc0 (107): Transport endpoint is not connected 00:17:23.672 [2024-12-15 13:31:29.243408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5dcc0 (9): Bad file descriptor 00:17:23.672 [2024-12-15 13:31:29.244404] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:23.672 [2024-12-15 13:31:29.244439] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:23.672 [2024-12-15 13:31:29.244448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:23.672 2024/12/15 13:31:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:23.672 request: 00:17:23.672 { 00:17:23.672 "method": "bdev_nvme_attach_controller", 00:17:23.672 "params": { 00:17:23.672 "name": "TLSTEST", 00:17:23.672 "trtype": "tcp", 00:17:23.672 "traddr": "10.0.0.2", 00:17:23.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:23.672 "adrfam": "ipv4", 00:17:23.672 "trsvcid": "4420", 00:17:23.672 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:23.672 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:23.672 } 00:17:23.672 } 00:17:23.672 Got JSON-RPC error response 00:17:23.672 GoRPCClient: error on JSON-RPC call 00:17:23.672 13:31:29 -- target/tls.sh@36 -- # killprocess 88908 00:17:23.672 13:31:29 -- common/autotest_common.sh@936 -- # '[' -z 88908 ']' 00:17:23.672 13:31:29 -- common/autotest_common.sh@940 -- # kill -0 88908 00:17:23.672 13:31:29 -- common/autotest_common.sh@941 -- # uname 00:17:23.672 13:31:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.672 13:31:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88908 00:17:23.672 killing process with pid 88908 00:17:23.672 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.672 00:17:23.672 Latency(us) 00:17:23.672 [2024-12-15T13:31:29.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.672 [2024-12-15T13:31:29.362Z] =================================================================================================================== 00:17:23.672 [2024-12-15T13:31:29.362Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:23.672 13:31:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:23.672 13:31:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:23.672 13:31:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88908' 00:17:23.672 13:31:29 -- common/autotest_common.sh@955 -- # kill 88908 00:17:23.672 13:31:29 -- common/autotest_common.sh@960 -- # wait 88908 00:17:23.932 13:31:29 -- target/tls.sh@37 -- # return 1 00:17:23.932 13:31:29 -- common/autotest_common.sh@653 -- # es=1 00:17:23.932 13:31:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:23.932 13:31:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:23.932 13:31:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:23.932 13:31:29 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.932 13:31:29 -- common/autotest_common.sh@650 -- # local es=0 00:17:23.932 13:31:29 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.932 13:31:29 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:23.932 13:31:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.932 13:31:29 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:23.932 13:31:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:23.932 13:31:29 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:23.932 13:31:29 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:23.932 13:31:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:23.932 13:31:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:23.932 13:31:29 -- target/tls.sh@23 -- # psk= 00:17:23.932 13:31:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.932 13:31:29 -- target/tls.sh@28 -- # bdevperf_pid=88953 00:17:23.932 13:31:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:23.932 13:31:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.932 13:31:29 -- target/tls.sh@31 -- # waitforlisten 88953 /var/tmp/bdevperf.sock 00:17:23.932 13:31:29 -- common/autotest_common.sh@829 -- # '[' -z 88953 ']' 00:17:23.932 13:31:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.932 13:31:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.932 13:31:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.932 13:31:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.932 13:31:29 -- common/autotest_common.sh@10 -- # set +x 00:17:23.932 [2024-12-15 13:31:29.528328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:23.932 [2024-12-15 13:31:29.528574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88953 ] 00:17:24.191 [2024-12-15 13:31:29.661009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.191 [2024-12-15 13:31:29.718763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.129 13:31:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.129 13:31:30 -- common/autotest_common.sh@862 -- # return 0 00:17:25.129 13:31:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:25.129 [2024-12-15 13:31:30.741545] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:25.129 [2024-12-15 13:31:30.743365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eba8c0 (9): Bad file descriptor 00:17:25.129 [2024-12-15 13:31:30.744360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:25.129 [2024-12-15 13:31:30.744382] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:25.129 [2024-12-15 13:31:30.744391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
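For reference, the request/response dumps that GoRPCClient prints throughout this run (another one follows immediately below) are ordinary JSON-RPC 2.0 exchanges over the bdevperf Unix domain socket passed with -r /var/tmp/bdevperf.sock. The sketch below is not part of the test suite; it only illustrates, under that assumption, what one of these bdev_nvme_attach_controller calls looks like on the wire, with parameters copied from the trace (the PSK variants of the test add a "psk" entry pointing at the key file). The single recv() is a simplification acceptable for a sketch only.

    #!/usr/bin/env python3
    # Hypothetical illustration (not SPDK code): issue one JSON-RPC 2.0 call
    # against the bdevperf RPC socket, mirroring the requests shown in this log.
    import json
    import socket

    def spdk_rpc(sock_path, method, params):
        request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            # Sketch-level read: assume the whole response arrives in one recv().
            return json.loads(sock.recv(1 << 20).decode())

    if __name__ == "__main__":
        # Parameters mirror the attach attempted above (no "psk" given); the
        # response then carries error code -32602, as the trace shows.
        print(spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
        }))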
00:17:25.129 2024/12/15 13:31:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:25.129 request: 00:17:25.129 { 00:17:25.129 "method": "bdev_nvme_attach_controller", 00:17:25.129 "params": { 00:17:25.129 "name": "TLSTEST", 00:17:25.129 "trtype": "tcp", 00:17:25.129 "traddr": "10.0.0.2", 00:17:25.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:25.129 "adrfam": "ipv4", 00:17:25.129 "trsvcid": "4420", 00:17:25.129 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:25.129 } 00:17:25.129 } 00:17:25.129 Got JSON-RPC error response 00:17:25.129 GoRPCClient: error on JSON-RPC call 00:17:25.129 13:31:30 -- target/tls.sh@36 -- # killprocess 88953 00:17:25.129 13:31:30 -- common/autotest_common.sh@936 -- # '[' -z 88953 ']' 00:17:25.129 13:31:30 -- common/autotest_common.sh@940 -- # kill -0 88953 00:17:25.129 13:31:30 -- common/autotest_common.sh@941 -- # uname 00:17:25.129 13:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:25.129 13:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88953 00:17:25.129 killing process with pid 88953 00:17:25.129 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.129 00:17:25.129 Latency(us) 00:17:25.129 [2024-12-15T13:31:30.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.129 [2024-12-15T13:31:30.819Z] =================================================================================================================== 00:17:25.129 [2024-12-15T13:31:30.819Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:25.129 13:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:25.129 13:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:25.129 13:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88953' 00:17:25.129 13:31:30 -- common/autotest_common.sh@955 -- # kill 88953 00:17:25.129 13:31:30 -- common/autotest_common.sh@960 -- # wait 88953 00:17:25.388 13:31:30 -- target/tls.sh@37 -- # return 1 00:17:25.388 13:31:30 -- common/autotest_common.sh@653 -- # es=1 00:17:25.388 13:31:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:25.388 13:31:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:25.388 13:31:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:25.388 13:31:30 -- target/tls.sh@167 -- # killprocess 88302 00:17:25.388 13:31:30 -- common/autotest_common.sh@936 -- # '[' -z 88302 ']' 00:17:25.388 13:31:30 -- common/autotest_common.sh@940 -- # kill -0 88302 00:17:25.388 13:31:30 -- common/autotest_common.sh@941 -- # uname 00:17:25.388 13:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:25.388 13:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88302 00:17:25.388 killing process with pid 88302 00:17:25.388 13:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:25.388 13:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:25.388 13:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88302' 00:17:25.388 13:31:31 -- common/autotest_common.sh@955 -- # kill 88302 00:17:25.388 13:31:31 -- common/autotest_common.sh@960 -- # wait 88302 00:17:25.647 13:31:31 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:25.647 13:31:31 -- target/tls.sh@49 -- # local key hash crc 00:17:25.647 13:31:31 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:25.647 13:31:31 -- target/tls.sh@51 -- # hash=02 00:17:25.647 13:31:31 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:25.647 13:31:31 -- target/tls.sh@52 -- # gzip -1 -c 00:17:25.647 13:31:31 -- target/tls.sh@52 -- # tail -c8 00:17:25.647 13:31:31 -- target/tls.sh@52 -- # head -c 4 00:17:25.647 13:31:31 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:25.648 13:31:31 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:25.648 13:31:31 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:25.648 13:31:31 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.648 13:31:31 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.648 13:31:31 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.648 13:31:31 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:25.648 13:31:31 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.648 13:31:31 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:25.648 13:31:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:25.648 13:31:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.648 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:25.648 13:31:31 -- nvmf/common.sh@469 -- # nvmfpid=89014 00:17:25.648 13:31:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.648 13:31:31 -- nvmf/common.sh@470 -- # waitforlisten 89014 00:17:25.648 13:31:31 -- common/autotest_common.sh@829 -- # '[' -z 89014 ']' 00:17:25.648 13:31:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.648 13:31:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.648 13:31:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.648 13:31:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.648 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:17:25.648 [2024-12-15 13:31:31.293675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:25.648 [2024-12-15 13:31:31.293768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.906 [2024-12-15 13:31:31.433419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.907 [2024-12-15 13:31:31.486697] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.907 [2024-12-15 13:31:31.486868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
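The format_interchange_psk step above builds key_long by appending a CRC-32 to the configured PSK and base64-encoding the result; the gzip -1 | tail -c8 | head -c4 pipeline extracts the little-endian CRC-32 from the gzip trailer, which is why the crc= value renders as raw bytes in the trace. A standalone sketch of the same computation, assuming only that reading of the pipeline, would be:

    #!/usr/bin/env python3
    # Hypothetical re-implementation (for illustration only) of the
    # format_interchange_psk shell helper traced above. Interchange format:
    # NVMeTLSkey-1:<hash id>:<base64(configured PSK || CRC-32 LE)>:
    import base64
    import struct
    import zlib

    def format_interchange_psk(configured_psk: bytes, hash_id: str) -> str:
        # zlib.crc32 computes the same CRC-32 that gzip stores in its trailer.
        crc = struct.pack("<I", zlib.crc32(configured_psk) & 0xFFFFFFFF)
        return "NVMeTLSkey-1:{}:{}:".format(
            hash_id, base64.b64encode(configured_psk + crc).decode("ascii"))

    if __name__ == "__main__":
        # Same 48-byte key and hash id 02 as in the trace; this should print
        # the key_long value written to key_long.txt above.
        print(format_interchange_psk(
            b"00112233445566778899aabbccddeeff0011223344556677", "02"))

Note that the key material fed through echo -n, gzip and base64 in the trace is the literal 48-character ASCII string, not its decoded hex bytes, so the sketch passes the bytes of that string unchanged.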
00:17:25.907 [2024-12-15 13:31:31.486880] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.907 [2024-12-15 13:31:31.486889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.907 [2024-12-15 13:31:31.486917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.843 13:31:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.843 13:31:32 -- common/autotest_common.sh@862 -- # return 0 00:17:26.843 13:31:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:26.843 13:31:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.843 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:17:26.843 13:31:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.843 13:31:32 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.843 13:31:32 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.843 13:31:32 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:27.102 [2024-12-15 13:31:32.603964] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.102 13:31:32 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:27.361 13:31:32 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:27.621 [2024-12-15 13:31:33.060087] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.621 [2024-12-15 13:31:33.060306] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.621 13:31:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.621 malloc0 00:17:27.621 13:31:33 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.893 13:31:33 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.166 13:31:33 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.166 13:31:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:28.166 13:31:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:28.166 13:31:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:28.166 13:31:33 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:28.166 13:31:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.166 13:31:33 -- target/tls.sh@28 -- # bdevperf_pid=89116 00:17:28.166 13:31:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.166 13:31:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.166 13:31:33 -- target/tls.sh@31 -- # waitforlisten 89116 /var/tmp/bdevperf.sock 00:17:28.166 13:31:33 -- 
common/autotest_common.sh@829 -- # '[' -z 89116 ']' 00:17:28.166 13:31:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.166 13:31:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.166 13:31:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.166 13:31:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.166 13:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:28.166 [2024-12-15 13:31:33.731516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:28.166 [2024-12-15 13:31:33.731639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89116 ] 00:17:28.425 [2024-12-15 13:31:33.872238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.425 [2024-12-15 13:31:33.942384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.361 13:31:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.361 13:31:34 -- common/autotest_common.sh@862 -- # return 0 00:17:29.361 13:31:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:29.362 [2024-12-15 13:31:34.990417] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.628 TLSTESTn1 00:17:29.629 13:31:35 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.629 Running I/O for 10 seconds... 
00:17:39.606 00:17:39.607 Latency(us) 00:17:39.607 [2024-12-15T13:31:45.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.607 [2024-12-15T13:31:45.297Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:39.607 Verification LBA range: start 0x0 length 0x2000 00:17:39.607 TLSTESTn1 : 10.01 6554.69 25.60 0.00 0.00 19498.93 4974.78 19899.11 00:17:39.607 [2024-12-15T13:31:45.297Z] =================================================================================================================== 00:17:39.607 [2024-12-15T13:31:45.297Z] Total : 6554.69 25.60 0.00 0.00 19498.93 4974.78 19899.11 00:17:39.607 0 00:17:39.607 13:31:45 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.607 13:31:45 -- target/tls.sh@45 -- # killprocess 89116 00:17:39.607 13:31:45 -- common/autotest_common.sh@936 -- # '[' -z 89116 ']' 00:17:39.607 13:31:45 -- common/autotest_common.sh@940 -- # kill -0 89116 00:17:39.607 13:31:45 -- common/autotest_common.sh@941 -- # uname 00:17:39.607 13:31:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.607 13:31:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89116 00:17:39.607 killing process with pid 89116 00:17:39.607 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.607 00:17:39.607 Latency(us) 00:17:39.607 [2024-12-15T13:31:45.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.607 [2024-12-15T13:31:45.297Z] =================================================================================================================== 00:17:39.607 [2024-12-15T13:31:45.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.607 13:31:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.607 13:31:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.607 13:31:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89116' 00:17:39.607 13:31:45 -- common/autotest_common.sh@955 -- # kill 89116 00:17:39.607 13:31:45 -- common/autotest_common.sh@960 -- # wait 89116 00:17:39.866 13:31:45 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.866 13:31:45 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.866 13:31:45 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.866 13:31:45 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.866 13:31:45 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.866 13:31:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.866 13:31:45 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.866 13:31:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.866 13:31:45 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:39.866 13:31:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.866 13:31:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.866 13:31:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.866 13:31:45 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:39.866 13:31:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.866 13:31:45 -- target/tls.sh@28 -- # bdevperf_pid=89268 00:17:39.866 13:31:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.866 13:31:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.866 13:31:45 -- target/tls.sh@31 -- # waitforlisten 89268 /var/tmp/bdevperf.sock 00:17:39.866 13:31:45 -- common/autotest_common.sh@829 -- # '[' -z 89268 ']' 00:17:39.866 13:31:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.866 13:31:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.866 13:31:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.866 13:31:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.866 13:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.866 [2024-12-15 13:31:45.539153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:39.866 [2024-12-15 13:31:45.539281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89268 ] 00:17:40.126 [2024-12-15 13:31:45.673632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.126 [2024-12-15 13:31:45.728515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.061 13:31:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.061 13:31:46 -- common/autotest_common.sh@862 -- # return 0 00:17:41.062 13:31:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.062 [2024-12-15 13:31:46.660909] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.062 [2024-12-15 13:31:46.660973] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:41.062 2024/12/15 13:31:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.062 request: 00:17:41.062 { 00:17:41.062 "method": "bdev_nvme_attach_controller", 00:17:41.062 "params": { 00:17:41.062 "name": "TLSTEST", 00:17:41.062 "trtype": "tcp", 00:17:41.062 "traddr": "10.0.0.2", 00:17:41.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.062 "adrfam": "ipv4", 00:17:41.062 "trsvcid": "4420", 00:17:41.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.062 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:41.062 } 00:17:41.062 } 00:17:41.062 Got 
JSON-RPC error response 00:17:41.062 GoRPCClient: error on JSON-RPC call 00:17:41.062 13:31:46 -- target/tls.sh@36 -- # killprocess 89268 00:17:41.062 13:31:46 -- common/autotest_common.sh@936 -- # '[' -z 89268 ']' 00:17:41.062 13:31:46 -- common/autotest_common.sh@940 -- # kill -0 89268 00:17:41.062 13:31:46 -- common/autotest_common.sh@941 -- # uname 00:17:41.062 13:31:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.062 13:31:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89268 00:17:41.062 killing process with pid 89268 00:17:41.062 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.062 00:17:41.062 Latency(us) 00:17:41.062 [2024-12-15T13:31:46.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.062 [2024-12-15T13:31:46.752Z] =================================================================================================================== 00:17:41.062 [2024-12-15T13:31:46.752Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.062 13:31:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.062 13:31:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.062 13:31:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89268' 00:17:41.062 13:31:46 -- common/autotest_common.sh@955 -- # kill 89268 00:17:41.062 13:31:46 -- common/autotest_common.sh@960 -- # wait 89268 00:17:41.320 13:31:46 -- target/tls.sh@37 -- # return 1 00:17:41.321 13:31:46 -- common/autotest_common.sh@653 -- # es=1 00:17:41.321 13:31:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.321 13:31:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.321 13:31:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.321 13:31:46 -- target/tls.sh@183 -- # killprocess 89014 00:17:41.321 13:31:46 -- common/autotest_common.sh@936 -- # '[' -z 89014 ']' 00:17:41.321 13:31:46 -- common/autotest_common.sh@940 -- # kill -0 89014 00:17:41.321 13:31:46 -- common/autotest_common.sh@941 -- # uname 00:17:41.321 13:31:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.321 13:31:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89014 00:17:41.321 killing process with pid 89014 00:17:41.321 13:31:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.321 13:31:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.321 13:31:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89014' 00:17:41.321 13:31:46 -- common/autotest_common.sh@955 -- # kill 89014 00:17:41.321 13:31:46 -- common/autotest_common.sh@960 -- # wait 89014 00:17:41.579 13:31:47 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:41.579 13:31:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.579 13:31:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.579 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:17:41.579 13:31:47 -- nvmf/common.sh@469 -- # nvmfpid=89320 00:17:41.579 13:31:47 -- nvmf/common.sh@470 -- # waitforlisten 89320 00:17:41.579 13:31:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.579 13:31:47 -- common/autotest_common.sh@829 -- # '[' -z 89320 ']' 00:17:41.579 13:31:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:41.579 13:31:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.579 13:31:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.579 13:31:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.579 13:31:47 -- common/autotest_common.sh@10 -- # set +x 00:17:41.579 [2024-12-15 13:31:47.205242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:41.579 [2024-12-15 13:31:47.205339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.838 [2024-12-15 13:31:47.343544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.838 [2024-12-15 13:31:47.402237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.838 [2024-12-15 13:31:47.402405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.838 [2024-12-15 13:31:47.402416] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.838 [2024-12-15 13:31:47.402425] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.838 [2024-12-15 13:31:47.402456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.774 13:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.774 13:31:48 -- common/autotest_common.sh@862 -- # return 0 00:17:42.774 13:31:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:42.774 13:31:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:42.774 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:17:42.774 13:31:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.774 13:31:48 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.774 13:31:48 -- common/autotest_common.sh@650 -- # local es=0 00:17:42.774 13:31:48 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.774 13:31:48 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:42.774 13:31:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.774 13:31:48 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:42.775 13:31:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.775 13:31:48 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.775 13:31:48 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.775 13:31:48 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.775 [2024-12-15 13:31:48.389068] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.775 13:31:48 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.033 13:31:48 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.292 [2024-12-15 
13:31:48.873170] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.292 [2024-12-15 13:31:48.873380] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.292 13:31:48 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:43.550 malloc0 00:17:43.550 13:31:49 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:43.809 13:31:49 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.068 [2024-12-15 13:31:49.676751] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:44.068 [2024-12-15 13:31:49.676789] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:44.068 [2024-12-15 13:31:49.676808] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:44.068 2024/12/15 13:31:49 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:44.068 request: 00:17:44.068 { 00:17:44.068 "method": "nvmf_subsystem_add_host", 00:17:44.068 "params": { 00:17:44.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.068 "host": "nqn.2016-06.io.spdk:host1", 00:17:44.068 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:44.068 } 00:17:44.068 } 00:17:44.068 Got JSON-RPC error response 00:17:44.068 GoRPCClient: error on JSON-RPC call 00:17:44.068 13:31:49 -- common/autotest_common.sh@653 -- # es=1 00:17:44.068 13:31:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.068 13:31:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.068 13:31:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.068 13:31:49 -- target/tls.sh@189 -- # killprocess 89320 00:17:44.068 13:31:49 -- common/autotest_common.sh@936 -- # '[' -z 89320 ']' 00:17:44.068 13:31:49 -- common/autotest_common.sh@940 -- # kill -0 89320 00:17:44.068 13:31:49 -- common/autotest_common.sh@941 -- # uname 00:17:44.068 13:31:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.068 13:31:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89320 00:17:44.068 killing process with pid 89320 00:17:44.068 13:31:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:44.068 13:31:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:44.068 13:31:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89320' 00:17:44.068 13:31:49 -- common/autotest_common.sh@955 -- # kill 89320 00:17:44.068 13:31:49 -- common/autotest_common.sh@960 -- # wait 89320 00:17:44.327 13:31:49 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.327 13:31:49 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:44.327 13:31:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:44.327 13:31:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.327 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:17:44.327 13:31:49 -- nvmf/common.sh@469 -- # nvmfpid=89426 00:17:44.327 13:31:49 
-- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.327 13:31:49 -- nvmf/common.sh@470 -- # waitforlisten 89426 00:17:44.327 13:31:49 -- common/autotest_common.sh@829 -- # '[' -z 89426 ']' 00:17:44.327 13:31:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.327 13:31:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.327 13:31:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.327 13:31:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.327 13:31:49 -- common/autotest_common.sh@10 -- # set +x 00:17:44.327 [2024-12-15 13:31:49.976162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.327 [2024-12-15 13:31:49.976267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.586 [2024-12-15 13:31:50.105136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.586 [2024-12-15 13:31:50.177639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:44.586 [2024-12-15 13:31:50.177786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.586 [2024-12-15 13:31:50.177798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.586 [2024-12-15 13:31:50.177807] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
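The nvmf_subsystem_add_host failure above ("Incorrect permissions for PSK file", JSON-RPC code -32603) was provoked by the earlier chmod 0666 step, and chmod 0600 followed by restarting the target clears it. The exact mode test SPDK applies is not visible in this trace, so the 0o077 mask below is an assumption; a pre-flight check in that spirit would be:

    #!/usr/bin/env python3
    # Hypothetical pre-flight check (assumed rule, not SPDK code): refuse a PSK
    # file that is group/other accessible, the condition chmod 0666 created
    # above and chmod 0600 repairs.
    import os
    import stat
    import sys

    def psk_file_ok(path: str) -> bool:
        st = os.stat(path)
        if not stat.S_ISREG(st.st_mode):
            return False
        # Assumed requirement: owner-only access, no group/other bits set.
        return (st.st_mode & 0o077) == 0

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else \
            "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt"
        print("ok" if psk_file_ok(path) else "incorrect permissions")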
00:17:44.586 [2024-12-15 13:31:50.177829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.521 13:31:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.521 13:31:50 -- common/autotest_common.sh@862 -- # return 0 00:17:45.521 13:31:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:45.521 13:31:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.521 13:31:50 -- common/autotest_common.sh@10 -- # set +x 00:17:45.521 13:31:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.521 13:31:50 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.521 13:31:50 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.521 13:31:50 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:45.521 [2024-12-15 13:31:51.184736] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.521 13:31:51 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.780 13:31:51 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:46.038 [2024-12-15 13:31:51.588788] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.038 [2024-12-15 13:31:51.589224] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.038 13:31:51 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.297 malloc0 00:17:46.297 13:31:51 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.555 13:31:52 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.555 13:31:52 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.555 13:31:52 -- target/tls.sh@197 -- # bdevperf_pid=89530 00:17:46.555 13:31:52 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.555 13:31:52 -- target/tls.sh@200 -- # waitforlisten 89530 /var/tmp/bdevperf.sock 00:17:46.555 13:31:52 -- common/autotest_common.sh@829 -- # '[' -z 89530 ']' 00:17:46.555 13:31:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.555 13:31:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.555 13:31:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.555 13:31:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.555 13:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:46.814 [2024-12-15 13:31:52.254811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:46.814 [2024-12-15 13:31:52.255372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89530 ] 00:17:46.814 [2024-12-15 13:31:52.391495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.814 [2024-12-15 13:31:52.469300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.771 13:31:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.771 13:31:53 -- common/autotest_common.sh@862 -- # return 0 00:17:47.771 13:31:53 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.771 [2024-12-15 13:31:53.356680] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.771 TLSTESTn1 00:17:48.044 13:31:53 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:48.302 13:31:53 -- target/tls.sh@205 -- # tgtconf='{ 00:17:48.302 "subsystems": [ 00:17:48.302 { 00:17:48.302 "subsystem": "iobuf", 00:17:48.302 "config": [ 00:17:48.302 { 00:17:48.302 "method": "iobuf_set_options", 00:17:48.302 "params": { 00:17:48.302 "large_bufsize": 135168, 00:17:48.302 "large_pool_count": 1024, 00:17:48.302 "small_bufsize": 8192, 00:17:48.302 "small_pool_count": 8192 00:17:48.302 } 00:17:48.302 } 00:17:48.302 ] 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "subsystem": "sock", 00:17:48.302 "config": [ 00:17:48.302 { 00:17:48.302 "method": "sock_impl_set_options", 00:17:48.302 "params": { 00:17:48.302 "enable_ktls": false, 00:17:48.302 "enable_placement_id": 0, 00:17:48.302 "enable_quickack": false, 00:17:48.302 "enable_recv_pipe": true, 00:17:48.302 "enable_zerocopy_send_client": false, 00:17:48.302 "enable_zerocopy_send_server": true, 00:17:48.302 "impl_name": "posix", 00:17:48.302 "recv_buf_size": 2097152, 00:17:48.302 "send_buf_size": 2097152, 00:17:48.302 "tls_version": 0, 00:17:48.302 "zerocopy_threshold": 0 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "sock_impl_set_options", 00:17:48.302 "params": { 00:17:48.302 "enable_ktls": false, 00:17:48.302 "enable_placement_id": 0, 00:17:48.302 "enable_quickack": false, 00:17:48.302 "enable_recv_pipe": true, 00:17:48.302 "enable_zerocopy_send_client": false, 00:17:48.302 "enable_zerocopy_send_server": true, 00:17:48.302 "impl_name": "ssl", 00:17:48.302 "recv_buf_size": 4096, 00:17:48.302 "send_buf_size": 4096, 00:17:48.302 "tls_version": 0, 00:17:48.302 "zerocopy_threshold": 0 00:17:48.302 } 00:17:48.302 } 00:17:48.302 ] 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "subsystem": "vmd", 00:17:48.302 "config": [] 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "subsystem": "accel", 00:17:48.302 "config": [ 00:17:48.302 { 00:17:48.302 "method": "accel_set_options", 00:17:48.302 "params": { 00:17:48.302 "buf_count": 2048, 00:17:48.302 "large_cache_size": 16, 00:17:48.302 "sequence_count": 2048, 00:17:48.302 "small_cache_size": 128, 00:17:48.302 "task_count": 2048 00:17:48.302 } 00:17:48.302 } 00:17:48.302 ] 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "subsystem": "bdev", 00:17:48.302 "config": [ 00:17:48.302 { 00:17:48.302 "method": "bdev_set_options", 00:17:48.302 "params": { 00:17:48.302 
"bdev_auto_examine": true, 00:17:48.302 "bdev_io_cache_size": 256, 00:17:48.302 "bdev_io_pool_size": 65535, 00:17:48.302 "iobuf_large_cache_size": 16, 00:17:48.302 "iobuf_small_cache_size": 128 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "bdev_raid_set_options", 00:17:48.302 "params": { 00:17:48.302 "process_window_size_kb": 1024 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "bdev_iscsi_set_options", 00:17:48.302 "params": { 00:17:48.302 "timeout_sec": 30 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "bdev_nvme_set_options", 00:17:48.302 "params": { 00:17:48.302 "action_on_timeout": "none", 00:17:48.302 "allow_accel_sequence": false, 00:17:48.302 "arbitration_burst": 0, 00:17:48.302 "bdev_retry_count": 3, 00:17:48.302 "ctrlr_loss_timeout_sec": 0, 00:17:48.302 "delay_cmd_submit": true, 00:17:48.302 "fast_io_fail_timeout_sec": 0, 00:17:48.302 "generate_uuids": false, 00:17:48.302 "high_priority_weight": 0, 00:17:48.302 "io_path_stat": false, 00:17:48.302 "io_queue_requests": 0, 00:17:48.302 "keep_alive_timeout_ms": 10000, 00:17:48.302 "low_priority_weight": 0, 00:17:48.302 "medium_priority_weight": 0, 00:17:48.302 "nvme_adminq_poll_period_us": 10000, 00:17:48.302 "nvme_ioq_poll_period_us": 0, 00:17:48.302 "reconnect_delay_sec": 0, 00:17:48.302 "timeout_admin_us": 0, 00:17:48.302 "timeout_us": 0, 00:17:48.302 "transport_ack_timeout": 0, 00:17:48.302 "transport_retry_count": 4, 00:17:48.302 "transport_tos": 0 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "bdev_nvme_set_hotplug", 00:17:48.302 "params": { 00:17:48.302 "enable": false, 00:17:48.302 "period_us": 100000 00:17:48.302 } 00:17:48.302 }, 00:17:48.302 { 00:17:48.302 "method": "bdev_malloc_create", 00:17:48.302 "params": { 00:17:48.302 "block_size": 4096, 00:17:48.302 "name": "malloc0", 00:17:48.302 "num_blocks": 8192, 00:17:48.303 "optimal_io_boundary": 0, 00:17:48.303 "physical_block_size": 4096, 00:17:48.303 "uuid": "199a9272-51bd-48f2-9ab0-cade20405810" 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "bdev_wait_for_examine" 00:17:48.303 } 00:17:48.303 ] 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "subsystem": "nbd", 00:17:48.303 "config": [] 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "subsystem": "scheduler", 00:17:48.303 "config": [ 00:17:48.303 { 00:17:48.303 "method": "framework_set_scheduler", 00:17:48.303 "params": { 00:17:48.303 "name": "static" 00:17:48.303 } 00:17:48.303 } 00:17:48.303 ] 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "subsystem": "nvmf", 00:17:48.303 "config": [ 00:17:48.303 { 00:17:48.303 "method": "nvmf_set_config", 00:17:48.303 "params": { 00:17:48.303 "admin_cmd_passthru": { 00:17:48.303 "identify_ctrlr": false 00:17:48.303 }, 00:17:48.303 "discovery_filter": "match_any" 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_set_max_subsystems", 00:17:48.303 "params": { 00:17:48.303 "max_subsystems": 1024 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_set_crdt", 00:17:48.303 "params": { 00:17:48.303 "crdt1": 0, 00:17:48.303 "crdt2": 0, 00:17:48.303 "crdt3": 0 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_create_transport", 00:17:48.303 "params": { 00:17:48.303 "abort_timeout_sec": 1, 00:17:48.303 "buf_cache_size": 4294967295, 00:17:48.303 "c2h_success": false, 00:17:48.303 "dif_insert_or_strip": false, 00:17:48.303 "in_capsule_data_size": 4096, 00:17:48.303 "io_unit_size": 131072, 00:17:48.303 "max_aq_depth": 128, 
00:17:48.303 "max_io_qpairs_per_ctrlr": 127, 00:17:48.303 "max_io_size": 131072, 00:17:48.303 "max_queue_depth": 128, 00:17:48.303 "num_shared_buffers": 511, 00:17:48.303 "sock_priority": 0, 00:17:48.303 "trtype": "TCP", 00:17:48.303 "zcopy": false 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_create_subsystem", 00:17:48.303 "params": { 00:17:48.303 "allow_any_host": false, 00:17:48.303 "ana_reporting": false, 00:17:48.303 "max_cntlid": 65519, 00:17:48.303 "max_namespaces": 10, 00:17:48.303 "min_cntlid": 1, 00:17:48.303 "model_number": "SPDK bdev Controller", 00:17:48.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.303 "serial_number": "SPDK00000000000001" 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_subsystem_add_host", 00:17:48.303 "params": { 00:17:48.303 "host": "nqn.2016-06.io.spdk:host1", 00:17:48.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.303 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_subsystem_add_ns", 00:17:48.303 "params": { 00:17:48.303 "namespace": { 00:17:48.303 "bdev_name": "malloc0", 00:17:48.303 "nguid": "199A927251BD48F29AB0CADE20405810", 00:17:48.303 "nsid": 1, 00:17:48.303 "uuid": "199a9272-51bd-48f2-9ab0-cade20405810" 00:17:48.303 }, 00:17:48.303 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:48.303 } 00:17:48.303 }, 00:17:48.303 { 00:17:48.303 "method": "nvmf_subsystem_add_listener", 00:17:48.303 "params": { 00:17:48.303 "listen_address": { 00:17:48.303 "adrfam": "IPv4", 00:17:48.303 "traddr": "10.0.0.2", 00:17:48.303 "trsvcid": "4420", 00:17:48.303 "trtype": "TCP" 00:17:48.303 }, 00:17:48.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.303 "secure_channel": true 00:17:48.303 } 00:17:48.303 } 00:17:48.303 ] 00:17:48.303 } 00:17:48.303 ] 00:17:48.303 }' 00:17:48.303 13:31:53 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:48.560 13:31:54 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:48.561 "subsystems": [ 00:17:48.561 { 00:17:48.561 "subsystem": "iobuf", 00:17:48.561 "config": [ 00:17:48.561 { 00:17:48.561 "method": "iobuf_set_options", 00:17:48.561 "params": { 00:17:48.561 "large_bufsize": 135168, 00:17:48.561 "large_pool_count": 1024, 00:17:48.561 "small_bufsize": 8192, 00:17:48.561 "small_pool_count": 8192 00:17:48.561 } 00:17:48.561 } 00:17:48.561 ] 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "subsystem": "sock", 00:17:48.561 "config": [ 00:17:48.561 { 00:17:48.561 "method": "sock_impl_set_options", 00:17:48.561 "params": { 00:17:48.561 "enable_ktls": false, 00:17:48.561 "enable_placement_id": 0, 00:17:48.561 "enable_quickack": false, 00:17:48.561 "enable_recv_pipe": true, 00:17:48.561 "enable_zerocopy_send_client": false, 00:17:48.561 "enable_zerocopy_send_server": true, 00:17:48.561 "impl_name": "posix", 00:17:48.561 "recv_buf_size": 2097152, 00:17:48.561 "send_buf_size": 2097152, 00:17:48.561 "tls_version": 0, 00:17:48.561 "zerocopy_threshold": 0 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "sock_impl_set_options", 00:17:48.561 "params": { 00:17:48.561 "enable_ktls": false, 00:17:48.561 "enable_placement_id": 0, 00:17:48.561 "enable_quickack": false, 00:17:48.561 "enable_recv_pipe": true, 00:17:48.561 "enable_zerocopy_send_client": false, 00:17:48.561 "enable_zerocopy_send_server": true, 00:17:48.561 "impl_name": "ssl", 00:17:48.561 "recv_buf_size": 4096, 00:17:48.561 "send_buf_size": 4096, 00:17:48.561 
"tls_version": 0, 00:17:48.561 "zerocopy_threshold": 0 00:17:48.561 } 00:17:48.561 } 00:17:48.561 ] 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "subsystem": "vmd", 00:17:48.561 "config": [] 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "subsystem": "accel", 00:17:48.561 "config": [ 00:17:48.561 { 00:17:48.561 "method": "accel_set_options", 00:17:48.561 "params": { 00:17:48.561 "buf_count": 2048, 00:17:48.561 "large_cache_size": 16, 00:17:48.561 "sequence_count": 2048, 00:17:48.561 "small_cache_size": 128, 00:17:48.561 "task_count": 2048 00:17:48.561 } 00:17:48.561 } 00:17:48.561 ] 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "subsystem": "bdev", 00:17:48.561 "config": [ 00:17:48.561 { 00:17:48.561 "method": "bdev_set_options", 00:17:48.561 "params": { 00:17:48.561 "bdev_auto_examine": true, 00:17:48.561 "bdev_io_cache_size": 256, 00:17:48.561 "bdev_io_pool_size": 65535, 00:17:48.561 "iobuf_large_cache_size": 16, 00:17:48.561 "iobuf_small_cache_size": 128 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_raid_set_options", 00:17:48.561 "params": { 00:17:48.561 "process_window_size_kb": 1024 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_iscsi_set_options", 00:17:48.561 "params": { 00:17:48.561 "timeout_sec": 30 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_nvme_set_options", 00:17:48.561 "params": { 00:17:48.561 "action_on_timeout": "none", 00:17:48.561 "allow_accel_sequence": false, 00:17:48.561 "arbitration_burst": 0, 00:17:48.561 "bdev_retry_count": 3, 00:17:48.561 "ctrlr_loss_timeout_sec": 0, 00:17:48.561 "delay_cmd_submit": true, 00:17:48.561 "fast_io_fail_timeout_sec": 0, 00:17:48.561 "generate_uuids": false, 00:17:48.561 "high_priority_weight": 0, 00:17:48.561 "io_path_stat": false, 00:17:48.561 "io_queue_requests": 512, 00:17:48.561 "keep_alive_timeout_ms": 10000, 00:17:48.561 "low_priority_weight": 0, 00:17:48.561 "medium_priority_weight": 0, 00:17:48.561 "nvme_adminq_poll_period_us": 10000, 00:17:48.561 "nvme_ioq_poll_period_us": 0, 00:17:48.561 "reconnect_delay_sec": 0, 00:17:48.561 "timeout_admin_us": 0, 00:17:48.561 "timeout_us": 0, 00:17:48.561 "transport_ack_timeout": 0, 00:17:48.561 "transport_retry_count": 4, 00:17:48.561 "transport_tos": 0 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_nvme_attach_controller", 00:17:48.561 "params": { 00:17:48.561 "adrfam": "IPv4", 00:17:48.561 "ctrlr_loss_timeout_sec": 0, 00:17:48.561 "ddgst": false, 00:17:48.561 "fast_io_fail_timeout_sec": 0, 00:17:48.561 "hdgst": false, 00:17:48.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.561 "name": "TLSTEST", 00:17:48.561 "prchk_guard": false, 00:17:48.561 "prchk_reftag": false, 00:17:48.561 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:48.561 "reconnect_delay_sec": 0, 00:17:48.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.561 "traddr": "10.0.0.2", 00:17:48.561 "trsvcid": "4420", 00:17:48.561 "trtype": "TCP" 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_nvme_set_hotplug", 00:17:48.561 "params": { 00:17:48.561 "enable": false, 00:17:48.561 "period_us": 100000 00:17:48.561 } 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "method": "bdev_wait_for_examine" 00:17:48.561 } 00:17:48.561 ] 00:17:48.561 }, 00:17:48.561 { 00:17:48.561 "subsystem": "nbd", 00:17:48.561 "config": [] 00:17:48.561 } 00:17:48.561 ] 00:17:48.561 }' 00:17:48.561 13:31:54 -- target/tls.sh@208 -- # killprocess 89530 00:17:48.561 13:31:54 -- 
common/autotest_common.sh@936 -- # '[' -z 89530 ']' 00:17:48.561 13:31:54 -- common/autotest_common.sh@940 -- # kill -0 89530 00:17:48.561 13:31:54 -- common/autotest_common.sh@941 -- # uname 00:17:48.561 13:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.561 13:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89530 00:17:48.561 13:31:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:48.561 killing process with pid 89530 00:17:48.561 13:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:48.561 13:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89530' 00:17:48.561 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.561 00:17:48.561 Latency(us) 00:17:48.561 [2024-12-15T13:31:54.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.561 [2024-12-15T13:31:54.251Z] =================================================================================================================== 00:17:48.561 [2024-12-15T13:31:54.251Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.561 13:31:54 -- common/autotest_common.sh@955 -- # kill 89530 00:17:48.561 13:31:54 -- common/autotest_common.sh@960 -- # wait 89530 00:17:48.820 13:31:54 -- target/tls.sh@209 -- # killprocess 89426 00:17:48.820 13:31:54 -- common/autotest_common.sh@936 -- # '[' -z 89426 ']' 00:17:48.820 13:31:54 -- common/autotest_common.sh@940 -- # kill -0 89426 00:17:48.820 13:31:54 -- common/autotest_common.sh@941 -- # uname 00:17:48.820 13:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.820 13:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89426 00:17:48.820 killing process with pid 89426 00:17:48.820 13:31:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:48.820 13:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:48.820 13:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89426' 00:17:48.820 13:31:54 -- common/autotest_common.sh@955 -- # kill 89426 00:17:48.820 13:31:54 -- common/autotest_common.sh@960 -- # wait 89426 00:17:49.079 13:31:54 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:49.079 13:31:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:49.079 13:31:54 -- target/tls.sh@212 -- # echo '{ 00:17:49.079 "subsystems": [ 00:17:49.079 { 00:17:49.079 "subsystem": "iobuf", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "iobuf_set_options", 00:17:49.079 "params": { 00:17:49.079 "large_bufsize": 135168, 00:17:49.079 "large_pool_count": 1024, 00:17:49.079 "small_bufsize": 8192, 00:17:49.079 "small_pool_count": 8192 00:17:49.079 } 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "sock", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "sock_impl_set_options", 00:17:49.079 "params": { 00:17:49.079 "enable_ktls": false, 00:17:49.079 "enable_placement_id": 0, 00:17:49.079 "enable_quickack": false, 00:17:49.079 "enable_recv_pipe": true, 00:17:49.079 "enable_zerocopy_send_client": false, 00:17:49.079 "enable_zerocopy_send_server": true, 00:17:49.079 "impl_name": "posix", 00:17:49.079 "recv_buf_size": 2097152, 00:17:49.079 "send_buf_size": 2097152, 00:17:49.079 "tls_version": 0, 00:17:49.079 "zerocopy_threshold": 0 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "sock_impl_set_options", 00:17:49.079 "params": { 00:17:49.079 
"enable_ktls": false, 00:17:49.079 "enable_placement_id": 0, 00:17:49.079 "enable_quickack": false, 00:17:49.079 "enable_recv_pipe": true, 00:17:49.079 "enable_zerocopy_send_client": false, 00:17:49.079 "enable_zerocopy_send_server": true, 00:17:49.079 "impl_name": "ssl", 00:17:49.079 "recv_buf_size": 4096, 00:17:49.079 "send_buf_size": 4096, 00:17:49.079 "tls_version": 0, 00:17:49.079 "zerocopy_threshold": 0 00:17:49.079 } 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "vmd", 00:17:49.079 "config": [] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "accel", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "accel_set_options", 00:17:49.079 "params": { 00:17:49.079 "buf_count": 2048, 00:17:49.079 "large_cache_size": 16, 00:17:49.079 "sequence_count": 2048, 00:17:49.079 "small_cache_size": 128, 00:17:49.079 "task_count": 2048 00:17:49.079 } 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "bdev", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "bdev_set_options", 00:17:49.079 "params": { 00:17:49.079 "bdev_auto_examine": true, 00:17:49.079 "bdev_io_cache_size": 256, 00:17:49.079 "bdev_io_pool_size": 65535, 00:17:49.079 "iobuf_large_cache_size": 16, 00:17:49.079 "iobuf_small_cache_size": 128 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_raid_set_options", 00:17:49.079 "params": { 00:17:49.079 "process_window_size_kb": 1024 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_iscsi_set_options", 00:17:49.079 "params": { 00:17:49.079 "timeout_sec": 30 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_nvme_set_options", 00:17:49.079 "params": { 00:17:49.079 "action_on_timeout": "none", 00:17:49.079 "allow_accel_sequence": false, 00:17:49.079 "arbitration_burst": 0, 00:17:49.079 "bdev_retry_count": 3, 00:17:49.079 "ctrlr_loss_timeout_sec": 0, 00:17:49.079 "delay_cmd_submit": true, 00:17:49.079 "fast_io_fail_timeout_sec": 0, 00:17:49.079 "generate_uuids": false, 00:17:49.079 "high_priority_weight": 0, 00:17:49.079 "io_path_stat": false, 00:17:49.079 "io_queue_requests": 0, 00:17:49.079 "keep_alive_timeout_ms": 10000, 00:17:49.079 "low_priority_weight": 0, 00:17:49.079 "medium_priority_weight": 0, 00:17:49.079 "nvme_adminq_poll_period_us": 10000, 00:17:49.079 "nvme_ioq_poll_period_us": 0, 00:17:49.079 "reconnect_delay_sec": 0, 00:17:49.079 "timeout_admin_us": 0, 00:17:49.079 "timeout_us": 0, 00:17:49.079 "transport_ack_timeout": 0, 00:17:49.079 "transport_retry_count": 4, 00:17:49.079 "transport_tos": 0 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_nvme_set_hotplug", 00:17:49.079 "params": { 00:17:49.079 "enable": false, 00:17:49.079 "period_us": 100000 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_malloc_create", 00:17:49.079 "params": { 00:17:49.079 "block_size": 4096, 00:17:49.079 "name": "malloc0", 00:17:49.079 "num_blocks": 8192, 00:17:49.079 "optimal_io_boundary": 0, 00:17:49.079 "physical_block_size": 4096, 00:17:49.079 "uuid": "199a9272-51bd-48f2-9ab0-cade20405810" 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "bdev_wait_for_examine" 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "nbd", 00:17:49.079 "config": [] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "scheduler", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "framework_set_scheduler", 00:17:49.079 
"params": { 00:17:49.079 "name": "static" 00:17:49.079 } 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "subsystem": "nvmf", 00:17:49.079 "config": [ 00:17:49.079 { 00:17:49.079 "method": "nvmf_set_config", 00:17:49.079 "params": { 00:17:49.079 "admin_cmd_passthru": { 00:17:49.079 "identify_ctrlr": false 00:17:49.079 }, 00:17:49.079 "discovery_filter": "match_any" 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_set_max_subsystems", 00:17:49.079 "params": { 00:17:49.079 "max_subsystems": 1024 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_set_crdt", 00:17:49.079 "params": { 00:17:49.079 "crdt1": 0, 00:17:49.079 "crdt2": 0, 00:17:49.079 "crdt3": 0 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_create_transport", 00:17:49.079 "params": { 00:17:49.079 "abort_timeout_sec": 1, 00:17:49.079 "buf_cache_size": 4294967295, 00:17:49.079 "c2h_success": false, 00:17:49.079 "dif_insert_or_strip": false, 00:17:49.079 "in_capsule_data_size": 4096, 00:17:49.079 "io_unit_size": 131072, 00:17:49.079 "max_aq_depth": 128, 00:17:49.079 "max_io_qpairs_per_ctrlr": 127, 00:17:49.079 "max_io_size": 131072, 00:17:49.079 "max_queue_depth": 128, 00:17:49.079 "num_shared_buffers": 511, 00:17:49.079 "sock_priority": 0, 00:17:49.079 "trtype": "TCP", 00:17:49.079 "zcopy": false 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_create_subsystem", 00:17:49.079 "params": { 00:17:49.079 "allow_any_host": false, 00:17:49.079 "ana_reporting": false, 00:17:49.079 "max_cntlid": 65519, 00:17:49.079 "max_namespaces": 10, 00:17:49.079 "min_cntlid": 1, 00:17:49.079 "model_number": "SPDK bdev Controller", 00:17:49.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.079 "serial_number": "SPDK00000000000001" 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_subsystem_add_host", 00:17:49.079 "params": { 00:17:49.079 "host": "nqn.2016-06.io.spdk:host1", 00:17:49.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.079 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_subsystem_add_ns", 00:17:49.079 "params": { 00:17:49.079 "namespace": { 00:17:49.079 "bdev_name": "malloc0", 00:17:49.079 "nguid": "199A927251BD48F29AB0CADE20405810", 00:17:49.079 "nsid": 1, 00:17:49.079 "uuid": "199a9272-51bd-48f2-9ab0-cade20405810" 00:17:49.079 }, 00:17:49.079 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:49.079 } 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "method": "nvmf_subsystem_add_listener", 00:17:49.079 "params": { 00:17:49.079 "listen_address": { 00:17:49.079 "adrfam": "IPv4", 00:17:49.079 "traddr": "10.0.0.2", 00:17:49.079 "trsvcid": "4420", 00:17:49.079 "trtype": "TCP" 00:17:49.079 }, 00:17:49.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.079 "secure_channel": true 00:17:49.079 } 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }' 00:17:49.079 13:31:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.079 13:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:49.079 13:31:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:49.079 13:31:54 -- nvmf/common.sh@469 -- # nvmfpid=89603 00:17:49.079 13:31:54 -- nvmf/common.sh@470 -- # waitforlisten 89603 00:17:49.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:49.079 13:31:54 -- common/autotest_common.sh@829 -- # '[' -z 89603 ']' 00:17:49.079 13:31:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.079 13:31:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.080 13:31:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.080 13:31:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.080 13:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:49.080 [2024-12-15 13:31:54.561747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:49.080 [2024-12-15 13:31:54.562016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.080 [2024-12-15 13:31:54.693544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.080 [2024-12-15 13:31:54.767060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:49.338 [2024-12-15 13:31:54.767363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.338 [2024-12-15 13:31:54.767384] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.338 [2024-12-15 13:31:54.767392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.338 [2024-12-15 13:31:54.767424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.338 [2024-12-15 13:31:54.977316] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.338 [2024-12-15 13:31:55.009273] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.338 [2024-12-15 13:31:55.009462] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.913 13:31:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.913 13:31:55 -- common/autotest_common.sh@862 -- # return 0 00:17:49.913 13:31:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.913 13:31:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.913 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:49.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.913 13:31:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.913 13:31:55 -- target/tls.sh@216 -- # bdevperf_pid=89647 00:17:49.913 13:31:55 -- target/tls.sh@217 -- # waitforlisten 89647 /var/tmp/bdevperf.sock 00:17:49.913 13:31:55 -- common/autotest_common.sh@829 -- # '[' -z 89647 ']' 00:17:49.913 13:31:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.913 13:31:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.913 13:31:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:49.913 13:31:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.913 13:31:55 -- common/autotest_common.sh@10 -- # set +x 00:17:49.913 13:31:55 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:49.913 13:31:55 -- target/tls.sh@213 -- # echo '{ 00:17:49.913 "subsystems": [ 00:17:49.913 { 00:17:49.913 "subsystem": "iobuf", 00:17:49.913 "config": [ 00:17:49.913 { 00:17:49.913 "method": "iobuf_set_options", 00:17:49.913 "params": { 00:17:49.913 "large_bufsize": 135168, 00:17:49.913 "large_pool_count": 1024, 00:17:49.913 "small_bufsize": 8192, 00:17:49.913 "small_pool_count": 8192 00:17:49.913 } 00:17:49.913 } 00:17:49.913 ] 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "subsystem": "sock", 00:17:49.913 "config": [ 00:17:49.913 { 00:17:49.913 "method": "sock_impl_set_options", 00:17:49.913 "params": { 00:17:49.913 "enable_ktls": false, 00:17:49.913 "enable_placement_id": 0, 00:17:49.913 "enable_quickack": false, 00:17:49.913 "enable_recv_pipe": true, 00:17:49.913 "enable_zerocopy_send_client": false, 00:17:49.913 "enable_zerocopy_send_server": true, 00:17:49.913 "impl_name": "posix", 00:17:49.913 "recv_buf_size": 2097152, 00:17:49.913 "send_buf_size": 2097152, 00:17:49.913 "tls_version": 0, 00:17:49.913 "zerocopy_threshold": 0 00:17:49.913 } 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "method": "sock_impl_set_options", 00:17:49.913 "params": { 00:17:49.913 "enable_ktls": false, 00:17:49.913 "enable_placement_id": 0, 00:17:49.913 "enable_quickack": false, 00:17:49.913 "enable_recv_pipe": true, 00:17:49.913 "enable_zerocopy_send_client": false, 00:17:49.913 "enable_zerocopy_send_server": true, 00:17:49.913 "impl_name": "ssl", 00:17:49.913 "recv_buf_size": 4096, 00:17:49.913 "send_buf_size": 4096, 00:17:49.913 "tls_version": 0, 00:17:49.913 "zerocopy_threshold": 0 00:17:49.913 } 00:17:49.913 } 00:17:49.913 ] 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "subsystem": "vmd", 00:17:49.913 "config": [] 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "subsystem": "accel", 00:17:49.913 "config": [ 00:17:49.913 { 00:17:49.913 "method": "accel_set_options", 00:17:49.913 "params": { 00:17:49.913 "buf_count": 2048, 00:17:49.913 "large_cache_size": 16, 00:17:49.913 "sequence_count": 2048, 00:17:49.913 "small_cache_size": 128, 00:17:49.913 "task_count": 2048 00:17:49.913 } 00:17:49.913 } 00:17:49.913 ] 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "subsystem": "bdev", 00:17:49.913 "config": [ 00:17:49.913 { 00:17:49.913 "method": "bdev_set_options", 00:17:49.913 "params": { 00:17:49.913 "bdev_auto_examine": true, 00:17:49.913 "bdev_io_cache_size": 256, 00:17:49.913 "bdev_io_pool_size": 65535, 00:17:49.913 "iobuf_large_cache_size": 16, 00:17:49.913 "iobuf_small_cache_size": 128 00:17:49.913 } 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "method": "bdev_raid_set_options", 00:17:49.913 "params": { 00:17:49.913 "process_window_size_kb": 1024 00:17:49.913 } 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "method": "bdev_iscsi_set_options", 00:17:49.913 "params": { 00:17:49.913 "timeout_sec": 30 00:17:49.913 } 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "method": "bdev_nvme_set_options", 00:17:49.913 "params": { 00:17:49.913 "action_on_timeout": "none", 00:17:49.913 "allow_accel_sequence": false, 00:17:49.913 "arbitration_burst": 0, 00:17:49.913 "bdev_retry_count": 3, 00:17:49.913 "ctrlr_loss_timeout_sec": 0, 00:17:49.913 "delay_cmd_submit": true, 00:17:49.913 "fast_io_fail_timeout_sec": 0, 
00:17:49.913 "generate_uuids": false, 00:17:49.913 "high_priority_weight": 0, 00:17:49.913 "io_path_stat": false, 00:17:49.913 "io_queue_requests": 512, 00:17:49.913 "keep_alive_timeout_ms": 10000, 00:17:49.913 "low_priority_weight": 0, 00:17:49.913 "medium_priority_weight": 0, 00:17:49.913 "nvme_adminq_poll_period_us": 10000, 00:17:49.913 "nvme_ioq_poll_period_us": 0, 00:17:49.913 "reconnect_delay_sec": 0, 00:17:49.913 "timeout_admin_us": 0, 00:17:49.913 "timeout_us": 0, 00:17:49.913 "transport_ack_timeout": 0, 00:17:49.913 "transport_retry_count": 4, 00:17:49.913 "transport_tos": 0 00:17:49.913 } 00:17:49.913 }, 00:17:49.913 { 00:17:49.913 "method": "bdev_nvme_attach_controller", 00:17:49.913 "params": { 00:17:49.913 "adrfam": "IPv4", 00:17:49.913 "ctrlr_loss_timeout_sec": 0, 00:17:49.913 "ddgst": false, 00:17:49.913 "fast_io_fail_timeout_sec": 0, 00:17:49.913 "hdgst": false, 00:17:49.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.913 "name": "TLSTEST", 00:17:49.913 "prchk_guard": false, 00:17:49.913 "prchk_reftag": false, 00:17:49.914 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:49.914 "reconnect_delay_sec": 0, 00:17:49.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.914 "traddr": "10.0.0.2", 00:17:49.914 "trsvcid": "4420", 00:17:49.914 "trtype": "TCP" 00:17:49.914 } 00:17:49.914 }, 00:17:49.914 { 00:17:49.914 "method": "bdev_nvme_set_hotplug", 00:17:49.914 "params": { 00:17:49.914 "enable": false, 00:17:49.914 "period_us": 100000 00:17:49.914 } 00:17:49.914 }, 00:17:49.914 { 00:17:49.914 "method": "bdev_wait_for_examine" 00:17:49.914 } 00:17:49.914 ] 00:17:49.914 }, 00:17:49.914 { 00:17:49.914 "subsystem": "nbd", 00:17:49.914 "config": [] 00:17:49.914 } 00:17:49.914 ] 00:17:49.914 }' 00:17:49.914 [2024-12-15 13:31:55.577698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:49.914 [2024-12-15 13:31:55.577788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89647 ] 00:17:50.172 [2024-12-15 13:31:55.715933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.172 [2024-12-15 13:31:55.787389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.431 [2024-12-15 13:31:55.942673] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.999 13:31:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.999 13:31:56 -- common/autotest_common.sh@862 -- # return 0 00:17:50.999 13:31:56 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:50.999 Running I/O for 10 seconds... 
00:18:00.976 00:18:00.976 Latency(us) 00:18:00.976 [2024-12-15T13:32:06.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.976 [2024-12-15T13:32:06.666Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:00.976 Verification LBA range: start 0x0 length 0x2000 00:18:00.976 TLSTESTn1 : 10.02 6487.30 25.34 0.00 0.00 19698.48 4021.53 18945.86 00:18:00.976 [2024-12-15T13:32:06.666Z] =================================================================================================================== 00:18:00.976 [2024-12-15T13:32:06.666Z] Total : 6487.30 25.34 0.00 0.00 19698.48 4021.53 18945.86 00:18:00.976 0 00:18:00.976 13:32:06 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.976 13:32:06 -- target/tls.sh@223 -- # killprocess 89647 00:18:00.976 13:32:06 -- common/autotest_common.sh@936 -- # '[' -z 89647 ']' 00:18:00.976 13:32:06 -- common/autotest_common.sh@940 -- # kill -0 89647 00:18:00.976 13:32:06 -- common/autotest_common.sh@941 -- # uname 00:18:00.976 13:32:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.976 13:32:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89647 00:18:01.235 13:32:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:01.235 killing process with pid 89647 00:18:01.236 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.236 00:18:01.236 Latency(us) 00:18:01.236 [2024-12-15T13:32:06.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.236 [2024-12-15T13:32:06.926Z] =================================================================================================================== 00:18:01.236 [2024-12-15T13:32:06.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.236 13:32:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:01.236 13:32:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89647' 00:18:01.236 13:32:06 -- common/autotest_common.sh@955 -- # kill 89647 00:18:01.236 13:32:06 -- common/autotest_common.sh@960 -- # wait 89647 00:18:01.236 13:32:06 -- target/tls.sh@224 -- # killprocess 89603 00:18:01.236 13:32:06 -- common/autotest_common.sh@936 -- # '[' -z 89603 ']' 00:18:01.236 13:32:06 -- common/autotest_common.sh@940 -- # kill -0 89603 00:18:01.236 13:32:06 -- common/autotest_common.sh@941 -- # uname 00:18:01.236 13:32:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.236 13:32:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89603 00:18:01.236 killing process with pid 89603 00:18:01.236 13:32:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.236 13:32:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.236 13:32:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89603' 00:18:01.236 13:32:06 -- common/autotest_common.sh@955 -- # kill 89603 00:18:01.236 13:32:06 -- common/autotest_common.sh@960 -- # wait 89603 00:18:01.495 13:32:07 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:01.495 13:32:07 -- target/tls.sh@227 -- # cleanup 00:18:01.495 13:32:07 -- target/tls.sh@15 -- # process_shm --id 0 00:18:01.495 13:32:07 -- common/autotest_common.sh@806 -- # type=--id 00:18:01.495 13:32:07 -- common/autotest_common.sh@807 -- # id=0 00:18:01.495 13:32:07 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:01.495 13:32:07 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:01.495 13:32:07 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:01.495 13:32:07 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:01.495 13:32:07 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:01.495 13:32:07 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:01.495 nvmf_trace.0 00:18:01.495 13:32:07 -- common/autotest_common.sh@821 -- # return 0 00:18:01.495 13:32:07 -- target/tls.sh@16 -- # killprocess 89647 00:18:01.495 13:32:07 -- common/autotest_common.sh@936 -- # '[' -z 89647 ']' 00:18:01.495 Process with pid 89647 is not found 00:18:01.495 13:32:07 -- common/autotest_common.sh@940 -- # kill -0 89647 00:18:01.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89647) - No such process 00:18:01.495 13:32:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89647 is not found' 00:18:01.495 13:32:07 -- target/tls.sh@17 -- # nvmftestfini 00:18:01.495 13:32:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:01.495 13:32:07 -- nvmf/common.sh@116 -- # sync 00:18:01.754 13:32:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:01.754 13:32:07 -- nvmf/common.sh@119 -- # set +e 00:18:01.754 13:32:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:01.754 13:32:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:01.754 rmmod nvme_tcp 00:18:01.754 rmmod nvme_fabrics 00:18:01.754 rmmod nvme_keyring 00:18:01.754 13:32:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:01.754 Process with pid 89603 is not found 00:18:01.754 13:32:07 -- nvmf/common.sh@123 -- # set -e 00:18:01.754 13:32:07 -- nvmf/common.sh@124 -- # return 0 00:18:01.754 13:32:07 -- nvmf/common.sh@477 -- # '[' -n 89603 ']' 00:18:01.754 13:32:07 -- nvmf/common.sh@478 -- # killprocess 89603 00:18:01.754 13:32:07 -- common/autotest_common.sh@936 -- # '[' -z 89603 ']' 00:18:01.754 13:32:07 -- common/autotest_common.sh@940 -- # kill -0 89603 00:18:01.754 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89603) - No such process 00:18:01.754 13:32:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89603 is not found' 00:18:01.754 13:32:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:01.754 13:32:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:01.754 13:32:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:01.754 13:32:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.754 13:32:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:01.754 13:32:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.754 13:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.754 13:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.754 13:32:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:01.754 13:32:07 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.754 ************************************ 00:18:01.754 END TEST nvmf_tls 00:18:01.754 ************************************ 00:18:01.754 00:18:01.754 real 1m10.528s 00:18:01.754 user 1m48.449s 00:18:01.754 sys 0m24.668s 00:18:01.754 13:32:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:01.754 13:32:07 -- common/autotest_common.sh@10 -- # 
set +x 00:18:01.754 13:32:07 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.754 13:32:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:01.754 13:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:01.754 13:32:07 -- common/autotest_common.sh@10 -- # set +x 00:18:01.754 ************************************ 00:18:01.754 START TEST nvmf_fips 00:18:01.754 ************************************ 00:18:01.754 13:32:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.754 * Looking for test storage... 00:18:01.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:01.754 13:32:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:01.754 13:32:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:01.754 13:32:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:02.013 13:32:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:02.013 13:32:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:02.013 13:32:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.013 13:32:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.013 13:32:07 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.013 13:32:07 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.013 13:32:07 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.013 13:32:07 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.013 13:32:07 -- scripts/common.sh@337 -- # local 'op=<' 00:18:02.013 13:32:07 -- scripts/common.sh@339 -- # ver1_l=2 00:18:02.013 13:32:07 -- scripts/common.sh@340 -- # ver2_l=1 00:18:02.013 13:32:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.013 13:32:07 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.013 13:32:07 -- scripts/common.sh@344 -- # : 1 00:18:02.013 13:32:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.013 13:32:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.013 13:32:07 -- scripts/common.sh@364 -- # decimal 1 00:18:02.013 13:32:07 -- scripts/common.sh@352 -- # local d=1 00:18:02.013 13:32:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.013 13:32:07 -- scripts/common.sh@354 -- # echo 1 00:18:02.013 13:32:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.013 13:32:07 -- scripts/common.sh@365 -- # decimal 2 00:18:02.013 13:32:07 -- scripts/common.sh@352 -- # local d=2 00:18:02.013 13:32:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.014 13:32:07 -- scripts/common.sh@354 -- # echo 2 00:18:02.014 13:32:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:02.014 13:32:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.014 13:32:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.014 13:32:07 -- scripts/common.sh@367 -- # return 0 00:18:02.014 13:32:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.014 13:32:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.014 --rc genhtml_branch_coverage=1 00:18:02.014 --rc genhtml_function_coverage=1 00:18:02.014 --rc genhtml_legend=1 00:18:02.014 --rc geninfo_all_blocks=1 00:18:02.014 --rc geninfo_unexecuted_blocks=1 00:18:02.014 00:18:02.014 ' 00:18:02.014 13:32:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.014 --rc genhtml_branch_coverage=1 00:18:02.014 --rc genhtml_function_coverage=1 00:18:02.014 --rc genhtml_legend=1 00:18:02.014 --rc geninfo_all_blocks=1 00:18:02.014 --rc geninfo_unexecuted_blocks=1 00:18:02.014 00:18:02.014 ' 00:18:02.014 13:32:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.014 --rc genhtml_branch_coverage=1 00:18:02.014 --rc genhtml_function_coverage=1 00:18:02.014 --rc genhtml_legend=1 00:18:02.014 --rc geninfo_all_blocks=1 00:18:02.014 --rc geninfo_unexecuted_blocks=1 00:18:02.014 00:18:02.014 ' 00:18:02.014 13:32:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.014 --rc genhtml_branch_coverage=1 00:18:02.014 --rc genhtml_function_coverage=1 00:18:02.014 --rc genhtml_legend=1 00:18:02.014 --rc geninfo_all_blocks=1 00:18:02.014 --rc geninfo_unexecuted_blocks=1 00:18:02.014 00:18:02.014 ' 00:18:02.014 13:32:07 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.014 13:32:07 -- nvmf/common.sh@7 -- # uname -s 00:18:02.014 13:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.014 13:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.014 13:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.014 13:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.014 13:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.014 13:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.014 13:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.014 13:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.014 13:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.014 13:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.014 13:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:18:02.014 
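nvmf/common.sh derives the initiator identity shown here from nvme-cli. A rough sketch of that setup (the exact derivation of NVME_HOSTID is an assumption; the generated values differ on every run):
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: reuse the uuid portion as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")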
13:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:18:02.014 13:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.014 13:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.014 13:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.014 13:32:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.014 13:32:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.014 13:32:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.014 13:32:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.014 13:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.014 13:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.014 13:32:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.014 13:32:07 -- paths/export.sh@5 -- # export PATH 00:18:02.014 13:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.014 13:32:07 -- nvmf/common.sh@46 -- # : 0 00:18:02.014 13:32:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.014 13:32:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.014 13:32:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.014 13:32:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.014 13:32:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.014 13:32:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:02.014 13:32:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.014 13:32:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.014 13:32:07 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.014 13:32:07 -- fips/fips.sh@89 -- # check_openssl_version 00:18:02.014 13:32:07 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:02.014 13:32:07 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:02.014 13:32:07 -- fips/fips.sh@85 -- # openssl version 00:18:02.014 13:32:07 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:02.014 13:32:07 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:02.014 13:32:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.014 13:32:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.014 13:32:07 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.014 13:32:07 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.014 13:32:07 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.014 13:32:07 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.014 13:32:07 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:02.014 13:32:07 -- scripts/common.sh@339 -- # ver1_l=3 00:18:02.014 13:32:07 -- scripts/common.sh@340 -- # ver2_l=3 00:18:02.014 13:32:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.014 13:32:07 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.014 13:32:07 -- scripts/common.sh@347 -- # : 1 00:18:02.014 13:32:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.014 13:32:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.014 13:32:07 -- scripts/common.sh@364 -- # decimal 3 00:18:02.014 13:32:07 -- scripts/common.sh@352 -- # local d=3 00:18:02.014 13:32:07 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.014 13:32:07 -- scripts/common.sh@354 -- # echo 3 00:18:02.014 13:32:07 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:02.014 13:32:07 -- scripts/common.sh@365 -- # decimal 3 00:18:02.014 13:32:07 -- scripts/common.sh@352 -- # local d=3 00:18:02.014 13:32:07 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:02.014 13:32:07 -- scripts/common.sh@354 -- # echo 3 00:18:02.014 13:32:07 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:02.014 13:32:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.014 13:32:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.014 13:32:07 -- scripts/common.sh@363 -- # (( v++ )) 00:18:02.014 13:32:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.014 13:32:07 -- scripts/common.sh@364 -- # decimal 1 00:18:02.014 13:32:07 -- scripts/common.sh@352 -- # local d=1 00:18:02.014 13:32:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.014 13:32:07 -- scripts/common.sh@354 -- # echo 1 00:18:02.014 13:32:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.014 13:32:07 -- scripts/common.sh@365 -- # decimal 0 00:18:02.014 13:32:07 -- scripts/common.sh@352 -- # local d=0 00:18:02.014 13:32:07 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:02.014 13:32:07 -- scripts/common.sh@354 -- # echo 0 00:18:02.014 13:32:07 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:02.014 13:32:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.014 13:32:07 -- scripts/common.sh@366 -- # return 0 00:18:02.014 13:32:07 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:02.014 13:32:07 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:02.014 13:32:07 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:02.014 13:32:07 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:02.014 13:32:07 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:02.015 13:32:07 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:02.015 13:32:07 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:02.015 13:32:07 -- fips/fips.sh@113 -- # build_openssl_config 00:18:02.015 13:32:07 -- fips/fips.sh@37 -- # cat 00:18:02.015 13:32:07 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:02.015 13:32:07 -- fips/fips.sh@58 -- # cat - 00:18:02.015 13:32:07 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:02.015 13:32:07 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:02.015 13:32:07 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:02.015 13:32:07 -- fips/fips.sh@116 -- # openssl list -providers 00:18:02.015 13:32:07 -- fips/fips.sh@116 -- # grep name 00:18:02.015 13:32:07 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:02.015 13:32:07 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:02.015 13:32:07 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:02.015 13:32:07 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:02.015 13:32:07 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.015 13:32:07 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:02.015 13:32:07 -- fips/fips.sh@127 -- # : 00:18:02.015 13:32:07 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:02.015 13:32:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.015 13:32:07 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:02.015 13:32:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.015 13:32:07 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:02.015 13:32:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.015 13:32:07 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:02.015 13:32:07 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:02.015 13:32:07 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:02.273 Error setting digest 00:18:02.273 40B214323A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:02.273 40B214323A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:02.273 13:32:07 -- common/autotest_common.sh@653 -- # es=1 00:18:02.273 13:32:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.273 13:32:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.273 13:32:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.273 13:32:07 -- fips/fips.sh@130 -- # nvmftestinit 00:18:02.273 13:32:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.273 13:32:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.273 13:32:07 -- nvmf/common.sh@436 -- # prepare_net_devs 
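In short, the checks above assert that OpenSSL is 3.x, that the FIPS module is installed, that both the base and fips providers load under the generated configuration, and that a non-approved digest is rejected. A sketch of the same assertions (spdk_fips.conf is the file build_openssl_config writes for this test):
openssl version                                    # must report 3.0.0 or newer
openssl info -modulesdir                           # fips.so expected under /usr/lib64/ossl-modules
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name    # expect "base" and "fips"
# MD5 is not FIPS-approved, so this must fail when the fips provider is enforced
echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 && echo "WARNING: FIPS mode not enforced"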
00:18:02.273 13:32:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.274 13:32:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.274 13:32:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.274 13:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.274 13:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.274 13:32:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:02.274 13:32:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:02.274 13:32:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:02.274 13:32:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:02.274 13:32:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:02.274 13:32:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:02.274 13:32:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.274 13:32:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.274 13:32:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.274 13:32:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:02.274 13:32:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.274 13:32:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.274 13:32:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.274 13:32:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.274 13:32:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.274 13:32:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.274 13:32:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.274 13:32:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.274 13:32:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:02.274 13:32:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:02.274 Cannot find device "nvmf_tgt_br" 00:18:02.274 13:32:07 -- nvmf/common.sh@154 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.274 Cannot find device "nvmf_tgt_br2" 00:18:02.274 13:32:07 -- nvmf/common.sh@155 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:02.274 13:32:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:02.274 Cannot find device "nvmf_tgt_br" 00:18:02.274 13:32:07 -- nvmf/common.sh@157 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:02.274 Cannot find device "nvmf_tgt_br2" 00:18:02.274 13:32:07 -- nvmf/common.sh@158 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:02.274 13:32:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:02.274 13:32:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.274 13:32:07 -- nvmf/common.sh@161 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.274 13:32:07 -- nvmf/common.sh@162 -- # true 00:18:02.274 13:32:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.274 13:32:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.274 13:32:07 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.274 13:32:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.274 13:32:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.274 13:32:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.274 13:32:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.274 13:32:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:02.274 13:32:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:02.274 13:32:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:02.274 13:32:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:02.274 13:32:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:02.274 13:32:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:02.274 13:32:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.274 13:32:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.274 13:32:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.533 13:32:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:02.533 13:32:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:02.533 13:32:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.533 13:32:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.533 13:32:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.533 13:32:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.533 13:32:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.533 13:32:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:02.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:18:02.533 00:18:02.533 --- 10.0.0.2 ping statistics --- 00:18:02.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.533 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:18:02.533 13:32:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:02.533 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.533 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:02.533 00:18:02.533 --- 10.0.0.3 ping statistics --- 00:18:02.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.533 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:02.533 13:32:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:02.533 00:18:02.533 --- 10.0.0.1 ping statistics --- 00:18:02.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.533 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:02.533 13:32:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.533 13:32:08 -- nvmf/common.sh@421 -- # return 0 00:18:02.533 13:32:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:02.533 13:32:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.533 13:32:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:02.533 13:32:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:02.533 13:32:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.533 13:32:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:02.533 13:32:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:02.533 13:32:08 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:02.533 13:32:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:02.533 13:32:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.533 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 13:32:08 -- nvmf/common.sh@469 -- # nvmfpid=90008 00:18:02.533 13:32:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.533 13:32:08 -- nvmf/common.sh@470 -- # waitforlisten 90008 00:18:02.533 13:32:08 -- common/autotest_common.sh@829 -- # '[' -z 90008 ']' 00:18:02.533 13:32:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.533 13:32:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.533 13:32:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.533 13:32:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.533 13:32:08 -- common/autotest_common.sh@10 -- # set +x 00:18:02.533 [2024-12-15 13:32:08.146716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:02.533 [2024-12-15 13:32:08.147002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.792 [2024-12-15 13:32:08.281583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.792 [2024-12-15 13:32:08.340468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:02.792 [2024-12-15 13:32:08.340629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.792 [2024-12-15 13:32:08.340643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.792 [2024-12-15 13:32:08.340650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
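The nvmf_veth_init trace above is the network scaffolding every TCP test in this log depends on: a dedicated network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, 10.0.0.0/24 addressing, an iptables rule opening port 4420, and ping checks in both directions. The block below is a condensed sketch of that sequence, assembled only from commands visible in the trace; the namespace, interface names, and addresses are the harness's own values, nothing here is new. The earlier nomaster/down/delete commands and their "Cannot find device" messages are simply teardown of any leftover topology and are expected to fail on a fresh runner.

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per interface; the *_br ends stay in the host namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br      # all host-side ends joined on one bridge
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # initiator to both target IPs
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # target back to the initiator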
00:18:02.792 [2024-12-15 13:32:08.340673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.727 13:32:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.727 13:32:09 -- common/autotest_common.sh@862 -- # return 0 00:18:03.727 13:32:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:03.727 13:32:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.727 13:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:03.727 13:32:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.727 13:32:09 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:03.727 13:32:09 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:03.727 13:32:09 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.727 13:32:09 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:03.727 13:32:09 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.728 13:32:09 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.728 13:32:09 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.728 13:32:09 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.986 [2024-12-15 13:32:09.421453] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.986 [2024-12-15 13:32:09.437430] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.986 [2024-12-15 13:32:09.437693] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.986 malloc0 00:18:03.986 13:32:09 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.986 13:32:09 -- fips/fips.sh@147 -- # bdevperf_pid=90066 00:18:03.986 13:32:09 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.986 13:32:09 -- fips/fips.sh@148 -- # waitforlisten 90066 /var/tmp/bdevperf.sock 00:18:03.986 13:32:09 -- common/autotest_common.sh@829 -- # '[' -z 90066 ']' 00:18:03.986 13:32:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.986 13:32:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.986 13:32:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.986 13:32:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.986 13:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:03.986 [2024-12-15 13:32:09.559096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:03.986 [2024-12-15 13:32:09.559186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90066 ] 00:18:04.245 [2024-12-15 13:32:09.696491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.245 [2024-12-15 13:32:09.761580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.820 13:32:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.820 13:32:10 -- common/autotest_common.sh@862 -- # return 0 00:18:04.820 13:32:10 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:05.078 [2024-12-15 13:32:10.686509] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.078 TLSTESTn1 00:18:05.336 13:32:10 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.336 Running I/O for 10 seconds... 00:18:15.309 00:18:15.309 Latency(us) 00:18:15.309 [2024-12-15T13:32:20.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.309 [2024-12-15T13:32:20.999Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.309 Verification LBA range: start 0x0 length 0x2000 00:18:15.309 TLSTESTn1 : 10.02 6406.36 25.02 0.00 0.00 19948.18 4081.11 20256.58 00:18:15.309 [2024-12-15T13:32:20.999Z] =================================================================================================================== 00:18:15.309 [2024-12-15T13:32:20.999Z] Total : 6406.36 25.02 0.00 0.00 19948.18 4081.11 20256.58 00:18:15.309 0 00:18:15.309 13:32:20 -- fips/fips.sh@1 -- # cleanup 00:18:15.309 13:32:20 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:15.309 13:32:20 -- common/autotest_common.sh@806 -- # type=--id 00:18:15.309 13:32:20 -- common/autotest_common.sh@807 -- # id=0 00:18:15.309 13:32:20 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:15.309 13:32:20 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:15.309 13:32:20 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:15.309 13:32:20 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:15.309 13:32:20 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:15.309 13:32:20 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:15.309 nvmf_trace.0 00:18:15.309 13:32:20 -- common/autotest_common.sh@821 -- # return 0 00:18:15.309 13:32:20 -- fips/fips.sh@16 -- # killprocess 90066 00:18:15.309 13:32:20 -- common/autotest_common.sh@936 -- # '[' -z 90066 ']' 00:18:15.309 13:32:20 -- common/autotest_common.sh@940 -- # kill -0 90066 00:18:15.309 13:32:20 -- common/autotest_common.sh@941 -- # uname 00:18:15.309 13:32:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.309 13:32:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90066 00:18:15.568 13:32:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:15.568 killing process with pid 90066 00:18:15.568 Received shutdown signal, test time was 
about 10.000000 seconds 00:18:15.568 00:18:15.568 Latency(us) 00:18:15.568 [2024-12-15T13:32:21.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.568 [2024-12-15T13:32:21.258Z] =================================================================================================================== 00:18:15.568 [2024-12-15T13:32:21.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.568 13:32:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:15.568 13:32:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90066' 00:18:15.568 13:32:21 -- common/autotest_common.sh@955 -- # kill 90066 00:18:15.568 13:32:21 -- common/autotest_common.sh@960 -- # wait 90066 00:18:15.568 13:32:21 -- fips/fips.sh@17 -- # nvmftestfini 00:18:15.568 13:32:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.568 13:32:21 -- nvmf/common.sh@116 -- # sync 00:18:15.568 13:32:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.568 13:32:21 -- nvmf/common.sh@119 -- # set +e 00:18:15.568 13:32:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.568 13:32:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.568 rmmod nvme_tcp 00:18:15.827 rmmod nvme_fabrics 00:18:15.827 rmmod nvme_keyring 00:18:15.827 13:32:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.827 13:32:21 -- nvmf/common.sh@123 -- # set -e 00:18:15.827 13:32:21 -- nvmf/common.sh@124 -- # return 0 00:18:15.827 13:32:21 -- nvmf/common.sh@477 -- # '[' -n 90008 ']' 00:18:15.827 13:32:21 -- nvmf/common.sh@478 -- # killprocess 90008 00:18:15.827 13:32:21 -- common/autotest_common.sh@936 -- # '[' -z 90008 ']' 00:18:15.827 13:32:21 -- common/autotest_common.sh@940 -- # kill -0 90008 00:18:15.827 13:32:21 -- common/autotest_common.sh@941 -- # uname 00:18:15.827 13:32:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.827 13:32:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90008 00:18:15.827 13:32:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:15.827 killing process with pid 90008 00:18:15.827 13:32:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:15.827 13:32:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90008' 00:18:15.827 13:32:21 -- common/autotest_common.sh@955 -- # kill 90008 00:18:15.827 13:32:21 -- common/autotest_common.sh@960 -- # wait 90008 00:18:16.085 13:32:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:16.085 13:32:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:16.085 13:32:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:16.085 13:32:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.085 13:32:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:16.085 13:32:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.085 13:32:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.085 13:32:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.085 13:32:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:16.085 13:32:21 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:16.085 00:18:16.085 real 0m14.225s 00:18:16.085 user 0m18.947s 00:18:16.085 sys 0m5.860s 00:18:16.085 13:32:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:16.085 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.085 ************************************ 00:18:16.085 END TEST nvmf_fips 
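The nvmf_fips run that just ended exercises NVMe/TCP with TLS using a pre-shared key: the key string is written to a file, the file is restricted to owner-only access, the target announces a TLS-capable listener on 10.0.0.2:4420, and bdevperf attaches with --psk before running the 10 second verify workload that produced the TLSTESTn1 table above (roughly 6.4k IOPS, 25 MiB/s). Below is a minimal recap of the key handling and the initiator-side attach, using only the key string, paths, and RPC arguments that appear in the trace; the target-side subsystem setup also goes through scripts/rpc.py (against /var/tmp/spdk.sock) and is not expanded here.

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"    # the test keeps the PSK file owner-readable only

    # bdevperf is started separately with -z -r /var/tmp/bdevperf.sock, then the
    # controller is attached over TLS using the same PSK file:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$key_path"
    # the workload itself is driven by bdevperf.py -s /var/tmp/bdevperf.sock perform_tests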
00:18:16.085 ************************************ 00:18:16.085 13:32:21 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:16.085 13:32:21 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:16.085 13:32:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:16.085 13:32:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.085 13:32:21 -- common/autotest_common.sh@10 -- # set +x 00:18:16.085 ************************************ 00:18:16.085 START TEST nvmf_fuzz 00:18:16.085 ************************************ 00:18:16.085 13:32:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:16.085 * Looking for test storage... 00:18:16.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.085 13:32:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:16.085 13:32:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:16.085 13:32:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:16.345 13:32:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:16.345 13:32:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:16.345 13:32:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:16.345 13:32:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:16.345 13:32:21 -- scripts/common.sh@335 -- # IFS=.-: 00:18:16.345 13:32:21 -- scripts/common.sh@335 -- # read -ra ver1 00:18:16.345 13:32:21 -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.345 13:32:21 -- scripts/common.sh@336 -- # read -ra ver2 00:18:16.345 13:32:21 -- scripts/common.sh@337 -- # local 'op=<' 00:18:16.345 13:32:21 -- scripts/common.sh@339 -- # ver1_l=2 00:18:16.345 13:32:21 -- scripts/common.sh@340 -- # ver2_l=1 00:18:16.345 13:32:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:16.345 13:32:21 -- scripts/common.sh@343 -- # case "$op" in 00:18:16.345 13:32:21 -- scripts/common.sh@344 -- # : 1 00:18:16.345 13:32:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:16.345 13:32:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.345 13:32:21 -- scripts/common.sh@364 -- # decimal 1 00:18:16.345 13:32:21 -- scripts/common.sh@352 -- # local d=1 00:18:16.345 13:32:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.345 13:32:21 -- scripts/common.sh@354 -- # echo 1 00:18:16.345 13:32:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:16.345 13:32:21 -- scripts/common.sh@365 -- # decimal 2 00:18:16.345 13:32:21 -- scripts/common.sh@352 -- # local d=2 00:18:16.345 13:32:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.345 13:32:21 -- scripts/common.sh@354 -- # echo 2 00:18:16.345 13:32:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:16.345 13:32:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:16.345 13:32:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:16.345 13:32:21 -- scripts/common.sh@367 -- # return 0 00:18:16.345 13:32:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.345 13:32:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:16.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.345 --rc genhtml_branch_coverage=1 00:18:16.345 --rc genhtml_function_coverage=1 00:18:16.345 --rc genhtml_legend=1 00:18:16.345 --rc geninfo_all_blocks=1 00:18:16.345 --rc geninfo_unexecuted_blocks=1 00:18:16.345 00:18:16.345 ' 00:18:16.345 13:32:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:16.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.345 --rc genhtml_branch_coverage=1 00:18:16.345 --rc genhtml_function_coverage=1 00:18:16.345 --rc genhtml_legend=1 00:18:16.345 --rc geninfo_all_blocks=1 00:18:16.345 --rc geninfo_unexecuted_blocks=1 00:18:16.345 00:18:16.345 ' 00:18:16.345 13:32:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:16.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.345 --rc genhtml_branch_coverage=1 00:18:16.345 --rc genhtml_function_coverage=1 00:18:16.345 --rc genhtml_legend=1 00:18:16.345 --rc geninfo_all_blocks=1 00:18:16.345 --rc geninfo_unexecuted_blocks=1 00:18:16.345 00:18:16.345 ' 00:18:16.345 13:32:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:16.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.345 --rc genhtml_branch_coverage=1 00:18:16.345 --rc genhtml_function_coverage=1 00:18:16.345 --rc genhtml_legend=1 00:18:16.345 --rc geninfo_all_blocks=1 00:18:16.345 --rc geninfo_unexecuted_blocks=1 00:18:16.345 00:18:16.345 ' 00:18:16.345 13:32:21 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.345 13:32:21 -- nvmf/common.sh@7 -- # uname -s 00:18:16.345 13:32:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.345 13:32:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.345 13:32:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.345 13:32:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.345 13:32:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.345 13:32:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.345 13:32:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.345 13:32:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.345 13:32:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.345 13:32:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
00:18:16.345 13:32:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:18:16.345 13:32:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.345 13:32:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.345 13:32:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.345 13:32:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.345 13:32:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.345 13:32:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.345 13:32:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.345 13:32:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.345 13:32:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.345 13:32:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.345 13:32:21 -- paths/export.sh@5 -- # export PATH 00:18:16.345 13:32:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.345 13:32:21 -- nvmf/common.sh@46 -- # : 0 00:18:16.345 13:32:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:16.345 13:32:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:16.345 13:32:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:16.345 13:32:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.345 13:32:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.345 13:32:21 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:16.345 13:32:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:16.345 13:32:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:16.345 13:32:21 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:16.345 13:32:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:16.345 13:32:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.345 13:32:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:16.345 13:32:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:16.345 13:32:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:16.345 13:32:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.345 13:32:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.345 13:32:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.345 13:32:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:16.345 13:32:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:16.345 13:32:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.345 13:32:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.345 13:32:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.345 13:32:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:16.345 13:32:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.345 13:32:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.345 13:32:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.345 13:32:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.345 13:32:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.345 13:32:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.345 13:32:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.345 13:32:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.346 13:32:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:16.346 13:32:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:16.346 Cannot find device "nvmf_tgt_br" 00:18:16.346 13:32:21 -- nvmf/common.sh@154 -- # true 00:18:16.346 13:32:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.346 Cannot find device "nvmf_tgt_br2" 00:18:16.346 13:32:21 -- nvmf/common.sh@155 -- # true 00:18:16.346 13:32:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:16.346 13:32:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:16.346 Cannot find device "nvmf_tgt_br" 00:18:16.346 13:32:21 -- nvmf/common.sh@157 -- # true 00:18:16.346 13:32:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:16.346 Cannot find device "nvmf_tgt_br2" 00:18:16.346 13:32:21 -- nvmf/common.sh@158 -- # true 00:18:16.346 13:32:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.346 13:32:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.346 13:32:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.346 13:32:21 -- nvmf/common.sh@161 -- # true 00:18:16.346 13:32:21 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.346 13:32:21 -- nvmf/common.sh@162 -- # true 00:18:16.346 13:32:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.346 13:32:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.346 13:32:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.346 13:32:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.346 13:32:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.346 13:32:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.605 13:32:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.605 13:32:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.605 13:32:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.605 13:32:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.605 13:32:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.605 13:32:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.605 13:32:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.605 13:32:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.605 13:32:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.605 13:32:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.605 13:32:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.605 13:32:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.605 13:32:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.605 13:32:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.605 13:32:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.605 13:32:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.605 13:32:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.605 13:32:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:16.605 00:18:16.605 --- 10.0.0.2 ping statistics --- 00:18:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.605 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:16.605 13:32:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:16.605 00:18:16.605 --- 10.0.0.3 ping statistics --- 00:18:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.605 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:16.605 13:32:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:16.605 00:18:16.605 --- 10.0.0.1 ping statistics --- 00:18:16.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.605 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:16.605 13:32:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.605 13:32:22 -- nvmf/common.sh@421 -- # return 0 00:18:16.605 13:32:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.605 13:32:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.605 13:32:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.605 13:32:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.605 13:32:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.605 13:32:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.605 13:32:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.605 13:32:22 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90411 00:18:16.605 13:32:22 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:16.605 13:32:22 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:16.605 13:32:22 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90411 00:18:16.605 13:32:22 -- common/autotest_common.sh@829 -- # '[' -z 90411 ']' 00:18:16.605 13:32:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.605 13:32:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.605 13:32:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
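Before any configuration RPCs are issued, the fuzz test (like the fips test before it) launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the application's RPC socket answers; only then are the transport, malloc bdev, subsystem, namespace, and listener created and the fuzzer pointed at 10.0.0.2:4420, as seen next in the log. A sketch of that bring-up pattern, using the binary path, flags, and socket from the trace; the polling loop is a simplified stand-in for waitforlisten, whose real implementation also bounds its retries (the max_retries=100 visible in the trace) and checks that the process is still alive.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # poll the default RPC socket until the target responds; only then is it safe to configure it
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done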
00:18:16.605 13:32:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.605 13:32:22 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 13:32:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.981 13:32:23 -- common/autotest_common.sh@862 -- # return 0 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.981 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.981 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:17.981 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.981 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 Malloc0 00:18:17.981 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.981 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.981 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.981 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.981 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.981 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.981 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:17.981 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:17.981 Shutting down the fuzz application 00:18:17.981 13:32:23 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:18.550 Shutting down the fuzz application 00:18:18.550 13:32:23 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.550 13:32:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.550 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:18.550 13:32:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.550 13:32:23 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:18.550 13:32:23 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:18.550 13:32:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.550 13:32:23 -- nvmf/common.sh@116 -- # sync 00:18:18.550 13:32:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.550 13:32:24 -- nvmf/common.sh@119 -- # set +e 00:18:18.550 13:32:24 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.550 13:32:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.550 rmmod nvme_tcp 00:18:18.550 rmmod nvme_fabrics 00:18:18.550 rmmod nvme_keyring 00:18:18.550 13:32:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.550 13:32:24 -- nvmf/common.sh@123 -- # set -e 00:18:18.550 13:32:24 -- nvmf/common.sh@124 -- # return 0 00:18:18.550 13:32:24 -- nvmf/common.sh@477 -- # '[' -n 90411 ']' 00:18:18.550 13:32:24 -- nvmf/common.sh@478 -- # killprocess 90411 00:18:18.550 13:32:24 -- common/autotest_common.sh@936 -- # '[' -z 90411 ']' 00:18:18.550 13:32:24 -- common/autotest_common.sh@940 -- # kill -0 90411 00:18:18.550 13:32:24 -- common/autotest_common.sh@941 -- # uname 00:18:18.550 13:32:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.550 13:32:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90411 00:18:18.550 killing process with pid 90411 00:18:18.550 13:32:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.550 13:32:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.550 13:32:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90411' 00:18:18.550 13:32:24 -- common/autotest_common.sh@955 -- # kill 90411 00:18:18.550 13:32:24 -- common/autotest_common.sh@960 -- # wait 90411 00:18:18.809 13:32:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:18.809 13:32:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.809 13:32:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:18.809 13:32:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.809 13:32:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.809 13:32:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.809 13:32:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.809 13:32:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.809 13:32:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:18.809 13:32:24 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:18.809 00:18:18.809 real 0m2.776s 00:18:18.809 user 0m2.875s 00:18:18.809 sys 0m0.674s 00:18:18.809 13:32:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:18.809 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:18:18.809 ************************************ 00:18:18.809 END TEST nvmf_fuzz 00:18:18.809 ************************************ 00:18:18.809 13:32:24 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:18.809 13:32:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.809 13:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.809 13:32:24 -- common/autotest_common.sh@10 -- # set +x 00:18:18.809 ************************************ 00:18:18.809 START TEST nvmf_multiconnection 00:18:18.809 ************************************ 00:18:18.809 13:32:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:19.069 * Looking for test storage... 
00:18:19.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:19.069 13:32:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:19.069 13:32:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:19.069 13:32:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:19.069 13:32:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:19.069 13:32:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:19.069 13:32:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:19.069 13:32:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:19.069 13:32:24 -- scripts/common.sh@335 -- # IFS=.-: 00:18:19.069 13:32:24 -- scripts/common.sh@335 -- # read -ra ver1 00:18:19.069 13:32:24 -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.069 13:32:24 -- scripts/common.sh@336 -- # read -ra ver2 00:18:19.069 13:32:24 -- scripts/common.sh@337 -- # local 'op=<' 00:18:19.069 13:32:24 -- scripts/common.sh@339 -- # ver1_l=2 00:18:19.069 13:32:24 -- scripts/common.sh@340 -- # ver2_l=1 00:18:19.069 13:32:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:19.069 13:32:24 -- scripts/common.sh@343 -- # case "$op" in 00:18:19.069 13:32:24 -- scripts/common.sh@344 -- # : 1 00:18:19.069 13:32:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:19.069 13:32:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.069 13:32:24 -- scripts/common.sh@364 -- # decimal 1 00:18:19.069 13:32:24 -- scripts/common.sh@352 -- # local d=1 00:18:19.069 13:32:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.069 13:32:24 -- scripts/common.sh@354 -- # echo 1 00:18:19.069 13:32:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:19.069 13:32:24 -- scripts/common.sh@365 -- # decimal 2 00:18:19.069 13:32:24 -- scripts/common.sh@352 -- # local d=2 00:18:19.069 13:32:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.069 13:32:24 -- scripts/common.sh@354 -- # echo 2 00:18:19.069 13:32:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:19.069 13:32:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:19.069 13:32:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:19.069 13:32:24 -- scripts/common.sh@367 -- # return 0 00:18:19.069 13:32:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.069 13:32:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.069 --rc genhtml_branch_coverage=1 00:18:19.069 --rc genhtml_function_coverage=1 00:18:19.069 --rc genhtml_legend=1 00:18:19.069 --rc geninfo_all_blocks=1 00:18:19.069 --rc geninfo_unexecuted_blocks=1 00:18:19.069 00:18:19.069 ' 00:18:19.069 13:32:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.069 --rc genhtml_branch_coverage=1 00:18:19.069 --rc genhtml_function_coverage=1 00:18:19.069 --rc genhtml_legend=1 00:18:19.069 --rc geninfo_all_blocks=1 00:18:19.069 --rc geninfo_unexecuted_blocks=1 00:18:19.069 00:18:19.069 ' 00:18:19.069 13:32:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.069 --rc genhtml_branch_coverage=1 00:18:19.069 --rc genhtml_function_coverage=1 00:18:19.069 --rc genhtml_legend=1 00:18:19.069 --rc geninfo_all_blocks=1 00:18:19.069 --rc geninfo_unexecuted_blocks=1 00:18:19.069 00:18:19.069 ' 00:18:19.069 
13:32:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:19.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.069 --rc genhtml_branch_coverage=1 00:18:19.069 --rc genhtml_function_coverage=1 00:18:19.069 --rc genhtml_legend=1 00:18:19.069 --rc geninfo_all_blocks=1 00:18:19.069 --rc geninfo_unexecuted_blocks=1 00:18:19.069 00:18:19.069 ' 00:18:19.069 13:32:24 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.069 13:32:24 -- nvmf/common.sh@7 -- # uname -s 00:18:19.069 13:32:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.069 13:32:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.069 13:32:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.069 13:32:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.069 13:32:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.069 13:32:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.069 13:32:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.069 13:32:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.069 13:32:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.069 13:32:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.069 13:32:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:18:19.069 13:32:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:18:19.069 13:32:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.069 13:32:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.069 13:32:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.069 13:32:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.069 13:32:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.069 13:32:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.069 13:32:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.069 13:32:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.069 13:32:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.069 13:32:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.069 13:32:24 -- paths/export.sh@5 -- # export PATH 00:18:19.069 13:32:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.069 13:32:24 -- nvmf/common.sh@46 -- # : 0 00:18:19.069 13:32:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:19.069 13:32:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:19.069 13:32:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:19.070 13:32:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.070 13:32:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.070 13:32:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:19.070 13:32:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:19.070 13:32:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:19.070 13:32:24 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.070 13:32:24 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.070 13:32:24 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:19.070 13:32:24 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:19.070 13:32:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:19.070 13:32:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.070 13:32:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:19.070 13:32:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:19.070 13:32:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:19.070 13:32:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.070 13:32:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.070 13:32:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.070 13:32:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:19.070 13:32:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:19.070 13:32:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:19.070 13:32:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:19.070 13:32:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:19.070 13:32:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:19.070 13:32:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.070 13:32:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.070 13:32:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:19.070 13:32:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:19.070 13:32:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.070 13:32:24 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.070 13:32:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.070 13:32:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.070 13:32:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.070 13:32:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.070 13:32:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.070 13:32:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.070 13:32:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:19.070 13:32:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:19.070 Cannot find device "nvmf_tgt_br" 00:18:19.070 13:32:24 -- nvmf/common.sh@154 -- # true 00:18:19.070 13:32:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.070 Cannot find device "nvmf_tgt_br2" 00:18:19.070 13:32:24 -- nvmf/common.sh@155 -- # true 00:18:19.070 13:32:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:19.070 13:32:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:19.070 Cannot find device "nvmf_tgt_br" 00:18:19.070 13:32:24 -- nvmf/common.sh@157 -- # true 00:18:19.070 13:32:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:19.070 Cannot find device "nvmf_tgt_br2" 00:18:19.070 13:32:24 -- nvmf/common.sh@158 -- # true 00:18:19.070 13:32:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:19.329 13:32:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:19.329 13:32:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.329 13:32:24 -- nvmf/common.sh@161 -- # true 00:18:19.329 13:32:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.329 13:32:24 -- nvmf/common.sh@162 -- # true 00:18:19.329 13:32:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.329 13:32:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.329 13:32:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.329 13:32:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.329 13:32:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.329 13:32:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.329 13:32:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.329 13:32:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.329 13:32:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:19.329 13:32:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:19.329 13:32:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:19.329 13:32:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:19.329 13:32:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:19.329 13:32:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.329 13:32:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:19.329 13:32:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.329 13:32:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:19.329 13:32:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:19.329 13:32:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.329 13:32:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.329 13:32:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.329 13:32:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.329 13:32:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.329 13:32:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:19.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:19.329 00:18:19.329 --- 10.0.0.2 ping statistics --- 00:18:19.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.329 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:19.329 13:32:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:19.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:18:19.329 00:18:19.329 --- 10.0.0.3 ping statistics --- 00:18:19.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.329 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:19.329 13:32:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:18:19.329 00:18:19.329 --- 10.0.0.1 ping statistics --- 00:18:19.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.329 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:18:19.329 13:32:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.329 13:32:24 -- nvmf/common.sh@421 -- # return 0 00:18:19.329 13:32:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:19.329 13:32:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.329 13:32:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:19.329 13:32:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:19.329 13:32:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.329 13:32:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:19.329 13:32:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:19.587 13:32:25 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:19.587 13:32:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.587 13:32:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.587 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:18:19.587 13:32:25 -- nvmf/common.sh@469 -- # nvmfpid=90633 00:18:19.587 13:32:25 -- nvmf/common.sh@470 -- # waitforlisten 90633 00:18:19.587 13:32:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.587 13:32:25 -- common/autotest_common.sh@829 -- # '[' -z 90633 ']' 00:18:19.587 13:32:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.587 13:32:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.587 13:32:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.587 13:32:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.587 13:32:25 -- common/autotest_common.sh@10 -- # set +x 00:18:19.587 [2024-12-15 13:32:25.066998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:19.587 [2024-12-15 13:32:25.067073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.587 [2024-12-15 13:32:25.197107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.587 [2024-12-15 13:32:25.253256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.587 [2024-12-15 13:32:25.253403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.587 [2024-12-15 13:32:25.253415] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.587 [2024-12-15 13:32:25.253423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.587 [2024-12-15 13:32:25.253580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.587 [2024-12-15 13:32:25.253748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.587 [2024-12-15 13:32:25.253820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.587 [2024-12-15 13:32:25.253821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.525 13:32:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.525 13:32:26 -- common/autotest_common.sh@862 -- # return 0 00:18:20.525 13:32:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.525 13:32:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 13:32:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.525 13:32:26 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 [2024-12-15 13:32:26.130938] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.525 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.525 13:32:26 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:20.525 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.525 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 Malloc1 00:18:20.525 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.525 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.525 13:32:26 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.525 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 [2024-12-15 13:32:26.200745] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.525 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.525 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.525 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:20.525 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.525 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 Malloc2 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.783 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 Malloc3 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:20.783 
13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.783 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 Malloc4 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.783 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:20.783 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.783 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.783 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.784 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 Malloc5 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.784 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 Malloc6 00:18:20.784 13:32:26 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:20.784 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.784 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.784 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:20.784 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.784 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.042 Malloc7 00:18:21.042 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.042 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:21.042 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.042 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.042 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.043 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 Malloc8 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 
-- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.043 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 Malloc9 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.043 13:32:26 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 Malloc10 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.043 13:32:26 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 Malloc11 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:21.043 13:32:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.043 13:32:26 -- common/autotest_common.sh@10 -- # set +x 00:18:21.043 13:32:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.043 13:32:26 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:21.043 13:32:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.043 13:32:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.302 13:32:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:21.302 13:32:26 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.302 13:32:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.302 13:32:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.302 13:32:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:23.234 13:32:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:23.234 13:32:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:23.234 13:32:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:23.524 13:32:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:23.524 13:32:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.524 13:32:28 -- common/autotest_common.sh@1197 -- # return 0 00:18:23.524 13:32:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.525 13:32:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:23.525 13:32:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:23.525 13:32:29 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.525 13:32:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.525 13:32:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.525 13:32:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:25.427 13:32:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:25.427 13:32:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:25.427 13:32:31 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:25.427 13:32:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:25.427 13:32:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.427 13:32:31 -- common/autotest_common.sh@1197 -- # return 0 00:18:25.427 13:32:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.427 13:32:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:25.685 13:32:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:25.685 13:32:31 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.685 13:32:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.685 13:32:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.685 13:32:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:28.216 13:32:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:28.216 13:32:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:28.216 13:32:33 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:28.216 13:32:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:28.216 13:32:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.216 13:32:33 -- common/autotest_common.sh@1197 -- # return 0 00:18:28.216 13:32:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:28.216 13:32:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:28.216 13:32:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:28.216 13:32:33 -- common/autotest_common.sh@1187 -- # local i=0 00:18:28.216 13:32:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.216 13:32:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:28.216 13:32:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:30.118 13:32:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:30.118 13:32:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:30.118 13:32:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:30.118 13:32:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:30.118 13:32:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.118 13:32:35 -- common/autotest_common.sh@1197 -- # return 0 00:18:30.118 13:32:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:30.118 13:32:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:30.118 13:32:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:30.118 13:32:35 -- common/autotest_common.sh@1187 -- # local i=0 00:18:30.118 13:32:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.118 13:32:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:30.118 13:32:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.021 13:32:37 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.021 13:32:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.021 13:32:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:32.021 13:32:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.021 13:32:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.021 13:32:37 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.021 13:32:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.021 13:32:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:32.279 13:32:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:32.279 13:32:37 -- common/autotest_common.sh@1187 -- # local i=0 00:18:32.279 13:32:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.279 13:32:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.279 13:32:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:34.181 13:32:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:34.181 13:32:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:34.181 13:32:39 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:34.439 13:32:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:34.439 13:32:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.439 13:32:39 -- common/autotest_common.sh@1197 -- # return 0 00:18:34.439 13:32:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.439 13:32:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:34.439 13:32:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:34.439 13:32:40 -- common/autotest_common.sh@1187 -- # local i=0 00:18:34.439 13:32:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.439 13:32:40 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:34.439 13:32:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:36.972 13:32:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:36.972 13:32:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:36.972 13:32:42 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:36.972 13:32:42 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:36.972 13:32:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.972 13:32:42 -- common/autotest_common.sh@1197 -- # return 0 00:18:36.972 13:32:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.972 13:32:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:36.972 13:32:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:36.972 13:32:42 -- common/autotest_common.sh@1187 -- # local i=0 00:18:36.972 13:32:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.972 13:32:42 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.972 13:32:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:38.873 13:32:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:38.873 13:32:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:38.873 13:32:44 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:38.873 13:32:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:38.873 13:32:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.873 13:32:44 -- common/autotest_common.sh@1197 -- # return 0 00:18:38.873 13:32:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.873 13:32:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:38.873 13:32:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:38.873 13:32:44 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.873 13:32:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.873 13:32:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.873 13:32:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.787 13:32:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.787 13:32:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.787 13:32:46 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:41.045 13:32:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:41.045 13:32:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:41.045 13:32:46 -- common/autotest_common.sh@1197 -- # return 0 00:18:41.045 13:32:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:41.045 13:32:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:41.045 13:32:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:41.045 13:32:46 -- common/autotest_common.sh@1187 -- # local i=0 00:18:41.045 13:32:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.045 13:32:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:41.045 13:32:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:43.578 13:32:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:43.578 13:32:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:43.578 13:32:48 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:43.578 13:32:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:43.578 13:32:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.578 13:32:48 -- common/autotest_common.sh@1197 -- # return 0 00:18:43.578 13:32:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.578 13:32:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:43.578 13:32:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:43.578 13:32:48 -- common/autotest_common.sh@1187 -- # local i=0 
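For readers following the trace, the connect-and-verify sequence exercised above (repeated for cnode1 through cnode11) reduces to roughly the sketch below. The transport, address, port, subsystem NQNs, host UUID, 2-second poll interval and 15-attempt limit are the values visible in this run; the loop structure and variable names are an illustrative reconstruction, not the test script's exact implementation.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35
for i in $(seq 1 11); do
    # Connect the initiator to the i-th SPDK subsystem over NVMe/TCP.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode${i}" \
        --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}"
    # waitforserial: poll lsblk until a namespace with serial SPDK${i} appears,
    # giving up after 15 attempts spaced 2 seconds apart (as in the trace above).
    attempt=0
    until lsblk -l -o NAME,SERIAL | grep -q "SPDK${i}"; do
        attempt=$((attempt + 1))
        [ "$attempt" -le 15 ] || { echo "device SPDK${i} never appeared" >&2; exit 1; }
        sleep 2
    done
done
Once all eleven namespaces are visible as block devices, the test proceeds to drive them in parallel with the fio wrapper shown next.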
00:18:43.578 13:32:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.578 13:32:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:43.578 13:32:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:45.481 13:32:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:45.481 13:32:50 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:45.481 13:32:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:45.481 13:32:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:45.481 13:32:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.481 13:32:50 -- common/autotest_common.sh@1197 -- # return 0 00:18:45.481 13:32:50 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:45.481 [global] 00:18:45.481 thread=1 00:18:45.481 invalidate=1 00:18:45.481 rw=read 00:18:45.481 time_based=1 00:18:45.481 runtime=10 00:18:45.481 ioengine=libaio 00:18:45.481 direct=1 00:18:45.481 bs=262144 00:18:45.481 iodepth=64 00:18:45.481 norandommap=1 00:18:45.481 numjobs=1 00:18:45.481 00:18:45.481 [job0] 00:18:45.481 filename=/dev/nvme0n1 00:18:45.481 [job1] 00:18:45.481 filename=/dev/nvme10n1 00:18:45.481 [job2] 00:18:45.481 filename=/dev/nvme1n1 00:18:45.481 [job3] 00:18:45.481 filename=/dev/nvme2n1 00:18:45.481 [job4] 00:18:45.481 filename=/dev/nvme3n1 00:18:45.481 [job5] 00:18:45.481 filename=/dev/nvme4n1 00:18:45.481 [job6] 00:18:45.481 filename=/dev/nvme5n1 00:18:45.481 [job7] 00:18:45.481 filename=/dev/nvme6n1 00:18:45.481 [job8] 00:18:45.481 filename=/dev/nvme7n1 00:18:45.481 [job9] 00:18:45.481 filename=/dev/nvme8n1 00:18:45.481 [job10] 00:18:45.481 filename=/dev/nvme9n1 00:18:45.481 Could not set queue depth (nvme0n1) 00:18:45.481 Could not set queue depth (nvme10n1) 00:18:45.481 Could not set queue depth (nvme1n1) 00:18:45.481 Could not set queue depth (nvme2n1) 00:18:45.481 Could not set queue depth (nvme3n1) 00:18:45.481 Could not set queue depth (nvme4n1) 00:18:45.481 Could not set queue depth (nvme5n1) 00:18:45.481 Could not set queue depth (nvme6n1) 00:18:45.481 Could not set queue depth (nvme7n1) 00:18:45.481 Could not set queue depth (nvme8n1) 00:18:45.481 Could not set queue depth (nvme9n1) 00:18:45.739 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.739 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.739 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:18:45.740 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.740 fio-3.35 00:18:45.740 Starting 11 threads 00:18:57.949 00:18:57.949 job0: (groupid=0, jobs=1): err= 0: pid=91113: Sun Dec 15 13:33:01 2024 00:18:57.949 read: IOPS=563, BW=141MiB/s (148MB/s)(1424MiB/10108msec) 00:18:57.949 slat (usec): min=21, max=92881, avg=1692.55, stdev=6683.85 00:18:57.949 clat (usec): min=1226, max=247062, avg=111617.44, stdev=45085.99 00:18:57.949 lat (usec): min=1305, max=247143, avg=113309.98, stdev=46129.86 00:18:57.949 clat percentiles (msec): 00:18:57.949 | 1.00th=[ 3], 5.00th=[ 34], 10.00th=[ 57], 20.00th=[ 71], 00:18:57.949 | 30.00th=[ 83], 40.00th=[ 96], 50.00th=[ 120], 60.00th=[ 138], 00:18:57.949 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 161], 95.00th=[ 178], 00:18:57.949 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 245], 99.95th=[ 245], 00:18:57.949 | 99.99th=[ 247] 00:18:57.949 bw ( KiB/s): min=87552, max=291768, per=8.34%, avg=144103.90, stdev=57098.77, samples=20 00:18:57.949 iops : min= 342, max= 1139, avg=562.85, stdev=222.94, samples=20 00:18:57.949 lat (msec) : 2=0.19%, 4=1.21%, 10=1.42%, 20=0.79%, 50=3.76% 00:18:57.949 lat (msec) : 100=35.30%, 250=57.32% 00:18:57.949 cpu : usr=0.28%, sys=1.94%, ctx=1235, majf=0, minf=4097 00:18:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.949 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.949 job1: (groupid=0, jobs=1): err= 0: pid=91114: Sun Dec 15 13:33:01 2024 00:18:57.949 read: IOPS=483, BW=121MiB/s (127MB/s)(1220MiB/10102msec) 00:18:57.949 slat (usec): min=15, max=115375, avg=1975.04, stdev=7092.91 00:18:57.949 clat (msec): min=31, max=262, avg=130.24, stdev=29.99 00:18:57.949 lat (msec): min=32, max=302, avg=132.22, stdev=31.13 00:18:57.949 clat percentiles (msec): 00:18:57.949 | 1.00th=[ 75], 5.00th=[ 95], 10.00th=[ 101], 20.00th=[ 108], 00:18:57.949 | 30.00th=[ 114], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 130], 00:18:57.949 | 70.00th=[ 136], 80.00th=[ 150], 90.00th=[ 180], 95.00th=[ 190], 00:18:57.949 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 228], 99.95th=[ 264], 00:18:57.949 | 99.99th=[ 264] 00:18:57.949 bw ( KiB/s): min=71310, max=157381, per=7.14%, avg=123338.55, stdev=21944.76, samples=20 00:18:57.949 iops : min= 278, max= 614, avg=481.55, stdev=85.77, samples=20 00:18:57.949 lat (msec) : 50=0.76%, 100=9.20%, 250=89.98%, 500=0.06% 00:18:57.949 cpu : usr=0.15%, sys=1.90%, ctx=1182, majf=0, minf=4097 00:18:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.949 issued rwts: total=4881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.949 job2: (groupid=0, jobs=1): err= 0: pid=91115: Sun Dec 15 13:33:01 2024 00:18:57.949 read: IOPS=498, BW=125MiB/s (131MB/s)(1259MiB/10112msec) 00:18:57.949 slat (usec): min=15, max=82989, avg=1909.34, stdev=6596.32 00:18:57.949 clat (msec): min=8, max=236, avg=126.25, stdev=34.92 00:18:57.949 lat (msec): min=8, max=263, avg=128.15, stdev=35.87 
00:18:57.949 clat percentiles (msec): 00:18:57.949 | 1.00th=[ 20], 5.00th=[ 62], 10.00th=[ 96], 20.00th=[ 110], 00:18:57.949 | 30.00th=[ 113], 40.00th=[ 117], 50.00th=[ 124], 60.00th=[ 129], 00:18:57.949 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 176], 95.00th=[ 188], 00:18:57.949 | 99.00th=[ 209], 99.50th=[ 213], 99.90th=[ 220], 99.95th=[ 220], 00:18:57.949 | 99.99th=[ 236] 00:18:57.949 bw ( KiB/s): min=86866, max=168960, per=7.37%, avg=127286.15, stdev=23269.06, samples=20 00:18:57.949 iops : min= 339, max= 660, avg=497.15, stdev=90.90, samples=20 00:18:57.949 lat (msec) : 10=0.56%, 20=0.71%, 50=2.34%, 100=8.99%, 250=87.39% 00:18:57.949 cpu : usr=0.23%, sys=1.77%, ctx=1009, majf=0, minf=4097 00:18:57.949 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:57.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.949 issued rwts: total=5037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.949 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.949 job3: (groupid=0, jobs=1): err= 0: pid=91116: Sun Dec 15 13:33:01 2024 00:18:57.949 read: IOPS=844, BW=211MiB/s (221MB/s)(2120MiB/10046msec) 00:18:57.949 slat (usec): min=14, max=62307, avg=1121.65, stdev=4289.34 00:18:57.949 clat (msec): min=34, max=187, avg=74.49, stdev=28.52 00:18:57.949 lat (msec): min=34, max=198, avg=75.61, stdev=29.09 00:18:57.949 clat percentiles (msec): 00:18:57.949 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 55], 00:18:57.949 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 68], 00:18:57.949 | 70.00th=[ 73], 80.00th=[ 94], 90.00th=[ 131], 95.00th=[ 140], 00:18:57.949 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 169], 00:18:57.949 | 99.99th=[ 188] 00:18:57.950 bw ( KiB/s): min=115712, max=285696, per=12.47%, avg=215437.40, stdev=66644.41, samples=20 00:18:57.950 iops : min= 452, max= 1116, avg=841.45, stdev=260.35, samples=20 00:18:57.950 lat (msec) : 50=9.42%, 100=73.20%, 250=17.38% 00:18:57.950 cpu : usr=0.29%, sys=2.82%, ctx=1701, majf=0, minf=4097 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=8481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job4: (groupid=0, jobs=1): err= 0: pid=91117: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=510, BW=128MiB/s (134MB/s)(1289MiB/10101msec) 00:18:57.950 slat (usec): min=19, max=105122, avg=1716.32, stdev=6577.30 00:18:57.950 clat (msec): min=15, max=296, avg=123.46, stdev=46.48 00:18:57.950 lat (msec): min=19, max=296, avg=125.18, stdev=47.38 00:18:57.950 clat percentiles (msec): 00:18:57.950 | 1.00th=[ 45], 5.00th=[ 56], 10.00th=[ 62], 20.00th=[ 71], 00:18:57.950 | 30.00th=[ 82], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 144], 00:18:57.950 | 70.00th=[ 148], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 205], 00:18:57.950 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 239], 99.95th=[ 255], 00:18:57.950 | 99.99th=[ 296] 00:18:57.950 bw ( KiB/s): min=68096, max=228864, per=7.55%, avg=130403.25, stdev=43762.93, samples=20 00:18:57.950 iops : min= 266, max= 894, avg=509.20, stdev=171.00, samples=20 00:18:57.950 lat (msec) : 20=0.04%, 50=2.56%, 100=32.44%, 250=64.88%, 500=0.08% 
00:18:57.950 cpu : usr=0.23%, sys=1.82%, ctx=1083, majf=0, minf=4098 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=5157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job5: (groupid=0, jobs=1): err= 0: pid=91118: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=604, BW=151MiB/s (158MB/s)(1527MiB/10107msec) 00:18:57.950 slat (usec): min=15, max=82817, avg=1496.51, stdev=5710.07 00:18:57.950 clat (msec): min=9, max=253, avg=104.14, stdev=37.51 00:18:57.950 lat (msec): min=9, max=253, avg=105.64, stdev=38.32 00:18:57.950 clat percentiles (msec): 00:18:57.950 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 63], 20.00th=[ 70], 00:18:57.950 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 108], 00:18:57.950 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 163], 00:18:57.950 | 99.00th=[ 184], 99.50th=[ 209], 99.90th=[ 253], 99.95th=[ 253], 00:18:57.950 | 99.99th=[ 253] 00:18:57.950 bw ( KiB/s): min=96574, max=240609, per=8.96%, avg=154740.75, stdev=51368.60, samples=20 00:18:57.950 iops : min= 377, max= 939, avg=604.40, stdev=200.60, samples=20 00:18:57.950 lat (msec) : 10=0.08%, 20=0.03%, 50=4.39%, 100=47.96%, 250=47.36% 00:18:57.950 lat (msec) : 500=0.18% 00:18:57.950 cpu : usr=0.24%, sys=2.07%, ctx=1328, majf=0, minf=4097 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=6107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job6: (groupid=0, jobs=1): err= 0: pid=91119: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=532, BW=133MiB/s (140MB/s)(1345MiB/10098msec) 00:18:57.950 slat (usec): min=16, max=150961, avg=1750.19, stdev=7098.87 00:18:57.950 clat (msec): min=21, max=254, avg=118.12, stdev=46.48 00:18:57.950 lat (msec): min=21, max=318, avg=119.87, stdev=47.55 00:18:57.950 clat percentiles (msec): 00:18:57.950 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 69], 00:18:57.950 | 30.00th=[ 94], 40.00th=[ 111], 50.00th=[ 134], 60.00th=[ 142], 00:18:57.950 | 70.00th=[ 150], 80.00th=[ 157], 90.00th=[ 171], 95.00th=[ 184], 00:18:57.950 | 99.00th=[ 201], 99.50th=[ 213], 99.90th=[ 255], 99.95th=[ 255], 00:18:57.950 | 99.99th=[ 255] 00:18:57.950 bw ( KiB/s): min=82432, max=371943, per=7.89%, avg=136230.90, stdev=64202.21, samples=20 00:18:57.950 iops : min= 322, max= 1452, avg=531.85, stdev=250.71, samples=20 00:18:57.950 lat (msec) : 50=11.28%, 100=23.66%, 250=64.88%, 500=0.19% 00:18:57.950 cpu : usr=0.09%, sys=1.96%, ctx=1171, majf=0, minf=4097 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=5381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job7: (groupid=0, jobs=1): err= 0: pid=91120: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=569, BW=142MiB/s 
(149MB/s)(1439MiB/10106msec) 00:18:57.950 slat (usec): min=16, max=81544, avg=1647.41, stdev=5908.89 00:18:57.950 clat (msec): min=27, max=222, avg=110.50, stdev=27.16 00:18:57.950 lat (msec): min=28, max=222, avg=112.14, stdev=28.03 00:18:57.950 clat percentiles (msec): 00:18:57.950 | 1.00th=[ 45], 5.00th=[ 64], 10.00th=[ 72], 20.00th=[ 85], 00:18:57.950 | 30.00th=[ 99], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 122], 00:18:57.950 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 148], 00:18:57.950 | 99.00th=[ 171], 99.50th=[ 186], 99.90th=[ 211], 99.95th=[ 222], 00:18:57.950 | 99.99th=[ 222] 00:18:57.950 bw ( KiB/s): min=104448, max=232448, per=8.44%, avg=145738.00, stdev=32632.20, samples=20 00:18:57.950 iops : min= 408, max= 908, avg=569.25, stdev=127.44, samples=20 00:18:57.950 lat (msec) : 50=1.48%, 100=30.65%, 250=67.88% 00:18:57.950 cpu : usr=0.19%, sys=2.18%, ctx=1238, majf=0, minf=4097 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job8: (groupid=0, jobs=1): err= 0: pid=91121: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=447, BW=112MiB/s (117MB/s)(1129MiB/10100msec) 00:18:57.950 slat (usec): min=15, max=75927, avg=2144.78, stdev=7107.71 00:18:57.950 clat (msec): min=5, max=253, avg=140.58, stdev=42.51 00:18:57.950 lat (msec): min=5, max=255, avg=142.73, stdev=43.58 00:18:57.950 clat percentiles (msec): 00:18:57.950 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 87], 20.00th=[ 127], 00:18:57.950 | 30.00th=[ 138], 40.00th=[ 144], 50.00th=[ 148], 60.00th=[ 153], 00:18:57.950 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 197], 00:18:57.950 | 99.00th=[ 215], 99.50th=[ 226], 99.90th=[ 253], 99.95th=[ 253], 00:18:57.950 | 99.99th=[ 253] 00:18:57.950 bw ( KiB/s): min=76953, max=266752, per=6.60%, avg=114011.70, stdev=38140.35, samples=20 00:18:57.950 iops : min= 300, max= 1042, avg=445.05, stdev=149.04, samples=20 00:18:57.950 lat (msec) : 10=0.31%, 20=1.04%, 50=7.57%, 100=3.56%, 250=87.23% 00:18:57.950 lat (msec) : 500=0.29% 00:18:57.950 cpu : usr=0.14%, sys=1.57%, ctx=1105, majf=0, minf=4097 00:18:57.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:57.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.950 issued rwts: total=4517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.950 job9: (groupid=0, jobs=1): err= 0: pid=91122: Sun Dec 15 13:33:01 2024 00:18:57.950 read: IOPS=1130, BW=283MiB/s (296MB/s)(2840MiB/10046msec) 00:18:57.950 slat (usec): min=17, max=58206, avg=839.71, stdev=3287.84 00:18:57.950 clat (usec): min=622, max=232697, avg=55617.75, stdev=22239.90 00:18:57.950 lat (usec): min=691, max=240902, avg=56457.46, stdev=22607.17 00:18:57.951 clat percentiles (msec): 00:18:57.951 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 37], 00:18:57.951 | 30.00th=[ 45], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 62], 00:18:57.951 | 70.00th=[ 65], 80.00th=[ 70], 90.00th=[ 77], 95.00th=[ 85], 00:18:57.951 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 232], 99.95th=[ 232], 00:18:57.951 | 
99.99th=[ 232] 00:18:57.951 bw ( KiB/s): min=162304, max=480256, per=16.74%, avg=289176.00, stdev=82540.29, samples=20 00:18:57.951 iops : min= 634, max= 1876, avg=1129.55, stdev=322.43, samples=20 00:18:57.951 lat (usec) : 750=0.01%, 1000=0.01% 00:18:57.951 lat (msec) : 2=0.21%, 4=0.68%, 10=0.36%, 20=1.58%, 50=31.83% 00:18:57.951 lat (msec) : 100=62.82%, 250=2.50% 00:18:57.951 cpu : usr=0.33%, sys=3.91%, ctx=2319, majf=0, minf=4097 00:18:57.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.951 issued rwts: total=11361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.951 job10: (groupid=0, jobs=1): err= 0: pid=91123: Sun Dec 15 13:33:01 2024 00:18:57.951 read: IOPS=579, BW=145MiB/s (152MB/s)(1466MiB/10111msec) 00:18:57.951 slat (usec): min=15, max=126463, avg=1617.22, stdev=6554.02 00:18:57.951 clat (msec): min=17, max=253, avg=108.55, stdev=41.35 00:18:57.951 lat (msec): min=17, max=334, avg=110.17, stdev=42.34 00:18:57.951 clat percentiles (msec): 00:18:57.951 | 1.00th=[ 29], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 69], 00:18:57.951 | 30.00th=[ 77], 40.00th=[ 97], 50.00th=[ 112], 60.00th=[ 118], 00:18:57.951 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 176], 95.00th=[ 190], 00:18:57.951 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 247], 00:18:57.951 | 99.99th=[ 253] 00:18:57.951 bw ( KiB/s): min=83800, max=261632, per=8.59%, avg=148432.25, stdev=50296.08, samples=20 00:18:57.951 iops : min= 327, max= 1022, avg=579.75, stdev=196.50, samples=20 00:18:57.951 lat (msec) : 20=0.27%, 50=2.59%, 100=37.92%, 250=59.19%, 500=0.02% 00:18:57.951 cpu : usr=0.19%, sys=2.06%, ctx=1129, majf=0, minf=4097 00:18:57.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:57.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.951 issued rwts: total=5862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.951 00:18:57.951 Run status group 0 (all jobs): 00:18:57.951 READ: bw=1687MiB/s (1769MB/s), 112MiB/s-283MiB/s (117MB/s-296MB/s), io=16.7GiB (17.9GB), run=10046-10112msec 00:18:57.951 00:18:57.951 Disk stats (read/write): 00:18:57.951 nvme0n1: ios=11260/0, merge=0/0, ticks=1233744/0, in_queue=1233744, util=97.47% 00:18:57.951 nvme10n1: ios=9634/0, merge=0/0, ticks=1241130/0, in_queue=1241130, util=97.38% 00:18:57.951 nvme1n1: ios=9946/0, merge=0/0, ticks=1239078/0, in_queue=1239078, util=97.75% 00:18:57.951 nvme2n1: ios=16843/0, merge=0/0, ticks=1235469/0, in_queue=1235469, util=97.80% 00:18:57.951 nvme3n1: ios=10188/0, merge=0/0, ticks=1241687/0, in_queue=1241687, util=98.06% 00:18:57.951 nvme4n1: ios=12122/0, merge=0/0, ticks=1237010/0, in_queue=1237010, util=97.96% 00:18:57.951 nvme5n1: ios=10634/0, merge=0/0, ticks=1233127/0, in_queue=1233127, util=97.68% 00:18:57.951 nvme6n1: ios=11394/0, merge=0/0, ticks=1237423/0, in_queue=1237423, util=98.12% 00:18:57.951 nvme7n1: ios=8907/0, merge=0/0, ticks=1234503/0, in_queue=1234503, util=98.59% 00:18:57.951 nvme8n1: ios=22646/0, merge=0/0, ticks=1229481/0, in_queue=1229481, util=98.22% 00:18:57.951 nvme9n1: ios=11625/0, merge=0/0, ticks=1239147/0, in_queue=1239147, 
util=98.83% 00:18:57.951 13:33:01 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:57.951 [global] 00:18:57.951 thread=1 00:18:57.951 invalidate=1 00:18:57.951 rw=randwrite 00:18:57.951 time_based=1 00:18:57.951 runtime=10 00:18:57.951 ioengine=libaio 00:18:57.951 direct=1 00:18:57.951 bs=262144 00:18:57.951 iodepth=64 00:18:57.951 norandommap=1 00:18:57.951 numjobs=1 00:18:57.951 00:18:57.951 [job0] 00:18:57.951 filename=/dev/nvme0n1 00:18:57.951 [job1] 00:18:57.951 filename=/dev/nvme10n1 00:18:57.951 [job2] 00:18:57.951 filename=/dev/nvme1n1 00:18:57.951 [job3] 00:18:57.951 filename=/dev/nvme2n1 00:18:57.951 [job4] 00:18:57.951 filename=/dev/nvme3n1 00:18:57.951 [job5] 00:18:57.951 filename=/dev/nvme4n1 00:18:57.951 [job6] 00:18:57.951 filename=/dev/nvme5n1 00:18:57.951 [job7] 00:18:57.951 filename=/dev/nvme6n1 00:18:57.951 [job8] 00:18:57.951 filename=/dev/nvme7n1 00:18:57.951 [job9] 00:18:57.951 filename=/dev/nvme8n1 00:18:57.951 [job10] 00:18:57.951 filename=/dev/nvme9n1 00:18:57.951 Could not set queue depth (nvme0n1) 00:18:57.951 Could not set queue depth (nvme10n1) 00:18:57.951 Could not set queue depth (nvme1n1) 00:18:57.951 Could not set queue depth (nvme2n1) 00:18:57.951 Could not set queue depth (nvme3n1) 00:18:57.951 Could not set queue depth (nvme4n1) 00:18:57.951 Could not set queue depth (nvme5n1) 00:18:57.951 Could not set queue depth (nvme6n1) 00:18:57.951 Could not set queue depth (nvme7n1) 00:18:57.951 Could not set queue depth (nvme8n1) 00:18:57.951 Could not set queue depth (nvme9n1) 00:18:57.951 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.951 fio-3.35 00:18:57.951 Starting 11 threads 00:19:07.931 00:19:07.931 job0: (groupid=0, jobs=1): err= 0: pid=91317: Sun Dec 15 13:33:12 2024 00:19:07.931 write: IOPS=272, BW=68.1MiB/s (71.5MB/s)(695MiB/10199msec); 0 zone resets 00:19:07.931 slat (usec): min=25, max=48424, avg=3535.07, stdev=6771.46 00:19:07.931 clat (msec): min=18, max=450, avg=231.12, stdev=42.02 00:19:07.931 lat (msec): min=18, max=450, avg=234.65, stdev=42.19 00:19:07.931 clat percentiles (msec): 00:19:07.931 | 1.00th=[ 65], 5.00th=[ 142], 
10.00th=[ 194], 20.00th=[ 220], 00:19:07.931 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 243], 60.00th=[ 247], 00:19:07.931 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 262], 00:19:07.931 | 99.00th=[ 342], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 451], 00:19:07.931 | 99.99th=[ 451] 00:19:07.931 bw ( KiB/s): min=63361, max=112128, per=4.59%, avg=69548.35, stdev=10503.38, samples=20 00:19:07.931 iops : min= 247, max= 438, avg=271.60, stdev=41.05, samples=20 00:19:07.931 lat (msec) : 20=0.14%, 50=0.58%, 100=1.83%, 250=70.36%, 500=27.09% 00:19:07.931 cpu : usr=0.72%, sys=0.83%, ctx=2882, majf=0, minf=1 00:19:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.931 issued rwts: total=0,2780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.931 job1: (groupid=0, jobs=1): err= 0: pid=91318: Sun Dec 15 13:33:12 2024 00:19:07.931 write: IOPS=1417, BW=354MiB/s (371MB/s)(3557MiB/10040msec); 0 zone resets 00:19:07.931 slat (usec): min=18, max=18535, avg=698.33, stdev=1166.96 00:19:07.931 clat (usec): min=7623, max=81003, avg=44411.44, stdev=2677.92 00:19:07.931 lat (usec): min=8054, max=83650, avg=45109.77, stdev=2708.17 00:19:07.931 clat percentiles (usec): 00:19:07.931 | 1.00th=[40109], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:19:07.931 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44827], 00:19:07.931 | 70.00th=[45351], 80.00th=[45876], 90.00th=[46400], 95.00th=[46924], 00:19:07.931 | 99.00th=[47973], 99.50th=[49546], 99.90th=[72877], 99.95th=[78119], 00:19:07.931 | 99.99th=[81265] 00:19:07.931 bw ( KiB/s): min=356864, max=367104, per=23.92%, avg=362514.85, stdev=2919.21, samples=20 00:19:07.931 iops : min= 1394, max= 1434, avg=1416.05, stdev=11.39, samples=20 00:19:07.931 lat (msec) : 10=0.08%, 20=0.14%, 50=99.30%, 100=0.48% 00:19:07.931 cpu : usr=2.30%, sys=3.35%, ctx=17357, majf=0, minf=1 00:19:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.931 issued rwts: total=0,14228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.931 job2: (groupid=0, jobs=1): err= 0: pid=91330: Sun Dec 15 13:33:12 2024 00:19:07.931 write: IOPS=522, BW=131MiB/s (137MB/s)(1318MiB/10092msec); 0 zone resets 00:19:07.931 slat (usec): min=24, max=11875, avg=1843.11, stdev=3216.51 00:19:07.931 clat (msec): min=19, max=213, avg=120.63, stdev=15.45 00:19:07.931 lat (msec): min=19, max=213, avg=122.48, stdev=15.45 00:19:07.931 clat percentiles (msec): 00:19:07.931 | 1.00th=[ 41], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 118], 00:19:07.931 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 125], 00:19:07.931 | 70.00th=[ 126], 80.00th=[ 126], 90.00th=[ 127], 95.00th=[ 127], 00:19:07.931 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 205], 99.95th=[ 207], 00:19:07.931 | 99.99th=[ 213] 00:19:07.931 bw ( KiB/s): min=128766, max=164864, per=8.79%, avg=133298.55, stdev=7613.34, samples=20 00:19:07.931 iops : min= 502, max= 644, avg=520.60, stdev=29.77, samples=20 00:19:07.931 lat (msec) : 20=0.02%, 50=1.75%, 100=2.45%, 250=95.79% 00:19:07.931 cpu 
: usr=1.38%, sys=1.54%, ctx=6492, majf=0, minf=1 00:19:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.931 issued rwts: total=0,5271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.931 job3: (groupid=0, jobs=1): err= 0: pid=91331: Sun Dec 15 13:33:12 2024 00:19:07.931 write: IOPS=263, BW=65.9MiB/s (69.1MB/s)(672MiB/10207msec); 0 zone resets 00:19:07.931 slat (usec): min=24, max=55406, avg=3712.90, stdev=7078.57 00:19:07.931 clat (msec): min=4, max=448, avg=239.07, stdev=35.10 00:19:07.931 lat (msec): min=4, max=448, avg=242.78, stdev=34.75 00:19:07.931 clat percentiles (msec): 00:19:07.931 | 1.00th=[ 67], 5.00th=[ 201], 10.00th=[ 213], 20.00th=[ 226], 00:19:07.931 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:19:07.931 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 259], 95.00th=[ 264], 00:19:07.931 | 99.00th=[ 347], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 447], 00:19:07.931 | 99.99th=[ 451] 00:19:07.931 bw ( KiB/s): min=63361, max=74240, per=4.43%, avg=67185.25, stdev=3043.91, samples=20 00:19:07.931 iops : min= 247, max= 290, avg=262.30, stdev=11.90, samples=20 00:19:07.931 lat (msec) : 10=0.26%, 20=0.07%, 50=0.45%, 100=0.74%, 250=64.60% 00:19:07.931 lat (msec) : 500=33.88% 00:19:07.931 cpu : usr=0.75%, sys=0.90%, ctx=3278, majf=0, minf=1 00:19:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.931 issued rwts: total=0,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.931 job4: (groupid=0, jobs=1): err= 0: pid=91332: Sun Dec 15 13:33:12 2024 00:19:07.931 write: IOPS=255, BW=63.9MiB/s (67.0MB/s)(652MiB/10199msec); 0 zone resets 00:19:07.931 slat (usec): min=24, max=69431, avg=3830.78, stdev=7503.07 00:19:07.931 clat (msec): min=14, max=447, avg=246.31, stdev=35.29 00:19:07.931 lat (msec): min=14, max=447, avg=250.14, stdev=34.85 00:19:07.931 clat percentiles (msec): 00:19:07.931 | 1.00th=[ 75], 5.00th=[ 203], 10.00th=[ 215], 20.00th=[ 230], 00:19:07.931 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 257], 00:19:07.931 | 70.00th=[ 262], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:19:07.931 | 99.00th=[ 342], 99.50th=[ 401], 99.90th=[ 435], 99.95th=[ 447], 00:19:07.931 | 99.99th=[ 447] 00:19:07.931 bw ( KiB/s): min=59392, max=73728, per=4.30%, avg=65119.20, stdev=3764.24, samples=20 00:19:07.931 iops : min= 232, max= 288, avg=254.25, stdev=14.70, samples=20 00:19:07.931 lat (msec) : 20=0.12%, 50=0.46%, 100=0.77%, 250=39.80%, 500=58.86% 00:19:07.931 cpu : usr=0.83%, sys=0.93%, ctx=1523, majf=0, minf=1 00:19:07.931 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:07.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.931 issued rwts: total=0,2608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.931 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.931 job5: (groupid=0, jobs=1): err= 0: pid=91333: Sun Dec 15 13:33:12 2024 00:19:07.931 write: 
IOPS=267, BW=67.0MiB/s (70.2MB/s)(683MiB/10199msec); 0 zone resets 00:19:07.931 slat (usec): min=18, max=48861, avg=3610.02, stdev=6936.24 00:19:07.931 clat (msec): min=5, max=440, avg=235.02, stdev=44.03 00:19:07.931 lat (msec): min=5, max=440, avg=238.63, stdev=44.18 00:19:07.931 clat percentiles (msec): 00:19:07.931 | 1.00th=[ 35], 5.00th=[ 138], 10.00th=[ 205], 20.00th=[ 224], 00:19:07.931 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:19:07.931 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 266], 95.00th=[ 268], 00:19:07.931 | 99.00th=[ 326], 99.50th=[ 397], 99.90th=[ 426], 99.95th=[ 443], 00:19:07.932 | 99.99th=[ 443] 00:19:07.932 bw ( KiB/s): min=61440, max=109056, per=4.51%, avg=68331.75, stdev=9844.00, samples=20 00:19:07.932 iops : min= 240, max= 426, avg=266.85, stdev=38.46, samples=20 00:19:07.932 lat (msec) : 10=0.15%, 20=0.62%, 50=0.48%, 100=1.68%, 250=61.62% 00:19:07.932 lat (msec) : 500=35.46% 00:19:07.932 cpu : usr=0.68%, sys=0.78%, ctx=3363, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 job6: (groupid=0, jobs=1): err= 0: pid=91334: Sun Dec 15 13:33:12 2024 00:19:07.932 write: IOPS=1424, BW=356MiB/s (373MB/s)(3576MiB/10039msec); 0 zone resets 00:19:07.932 slat (usec): min=17, max=47086, avg=694.24, stdev=1241.96 00:19:07.932 clat (msec): min=37, max=121, avg=44.21, stdev= 4.08 00:19:07.932 lat (msec): min=39, max=125, avg=44.91, stdev= 4.16 00:19:07.932 clat percentiles (msec): 00:19:07.932 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 43], 00:19:07.932 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:19:07.932 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 46], 95.00th=[ 47], 00:19:07.932 | 99.00th=[ 65], 99.50th=[ 80], 99.90th=[ 84], 99.95th=[ 89], 00:19:07.932 | 99.99th=[ 122] 00:19:07.932 bw ( KiB/s): min=283136, max=374272, per=24.04%, avg=364371.65, stdev=19517.56, samples=20 00:19:07.932 iops : min= 1106, max= 1462, avg=1423.30, stdev=76.24, samples=20 00:19:07.932 lat (msec) : 50=97.92%, 100=2.06%, 250=0.03% 00:19:07.932 cpu : usr=2.23%, sys=3.60%, ctx=18810, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,14302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 job7: (groupid=0, jobs=1): err= 0: pid=91335: Sun Dec 15 13:33:12 2024 00:19:07.932 write: IOPS=508, BW=127MiB/s (133MB/s)(1286MiB/10103msec); 0 zone resets 00:19:07.932 slat (usec): min=20, max=36540, avg=1938.21, stdev=3341.91 00:19:07.932 clat (msec): min=3, max=222, avg=123.75, stdev=14.03 00:19:07.932 lat (msec): min=3, max=222, avg=125.69, stdev=13.84 00:19:07.932 clat percentiles (msec): 00:19:07.932 | 1.00th=[ 96], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 118], 00:19:07.932 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 125], 00:19:07.932 | 70.00th=[ 126], 80.00th=[ 126], 90.00th=[ 127], 95.00th=[ 128], 00:19:07.932 | 99.00th=[ 188], 99.50th=[ 192], 
99.90th=[ 215], 99.95th=[ 215], 00:19:07.932 | 99.99th=[ 224] 00:19:07.932 bw ( KiB/s): min=98304, max=134656, per=8.57%, avg=129982.85, stdev=7655.81, samples=20 00:19:07.932 iops : min= 384, max= 526, avg=507.60, stdev=29.87, samples=20 00:19:07.932 lat (msec) : 4=0.08%, 10=0.08%, 20=0.16%, 50=0.23%, 100=0.47% 00:19:07.932 lat (msec) : 250=98.99% 00:19:07.932 cpu : usr=1.31%, sys=1.79%, ctx=6399, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,5142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 job8: (groupid=0, jobs=1): err= 0: pid=91336: Sun Dec 15 13:33:12 2024 00:19:07.932 write: IOPS=254, BW=63.7MiB/s (66.8MB/s)(650MiB/10200msec); 0 zone resets 00:19:07.932 slat (usec): min=26, max=65511, avg=3844.39, stdev=7453.27 00:19:07.932 clat (msec): min=9, max=428, avg=247.21, stdev=33.87 00:19:07.932 lat (msec): min=9, max=428, avg=251.05, stdev=33.39 00:19:07.932 clat percentiles (msec): 00:19:07.932 | 1.00th=[ 95], 5.00th=[ 203], 10.00th=[ 215], 20.00th=[ 230], 00:19:07.932 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 259], 00:19:07.932 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 275], 00:19:07.932 | 99.00th=[ 338], 99.50th=[ 380], 99.90th=[ 414], 99.95th=[ 430], 00:19:07.932 | 99.99th=[ 430] 00:19:07.932 bw ( KiB/s): min=57856, max=78336, per=4.28%, avg=64890.65, stdev=4662.06, samples=20 00:19:07.932 iops : min= 226, max= 306, avg=253.40, stdev=18.22, samples=20 00:19:07.932 lat (msec) : 10=0.04%, 20=0.15%, 50=0.46%, 100=0.46%, 250=39.26% 00:19:07.932 lat (msec) : 500=59.62% 00:19:07.932 cpu : usr=0.88%, sys=0.87%, ctx=2454, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,2598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 job9: (groupid=0, jobs=1): err= 0: pid=91337: Sun Dec 15 13:33:12 2024 00:19:07.932 write: IOPS=289, BW=72.4MiB/s (75.9MB/s)(739MiB/10202msec); 0 zone resets 00:19:07.932 slat (usec): min=24, max=41669, avg=3330.57, stdev=6078.44 00:19:07.932 clat (msec): min=16, max=423, avg=217.49, stdev=35.86 00:19:07.932 lat (msec): min=16, max=424, avg=220.82, stdev=35.98 00:19:07.932 clat percentiles (msec): 00:19:07.932 | 1.00th=[ 73], 5.00th=[ 155], 10.00th=[ 190], 20.00th=[ 207], 00:19:07.932 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 224], 60.00th=[ 228], 00:19:07.932 | 70.00th=[ 232], 80.00th=[ 236], 90.00th=[ 243], 95.00th=[ 247], 00:19:07.932 | 99.00th=[ 321], 99.50th=[ 363], 99.90th=[ 409], 99.95th=[ 426], 00:19:07.932 | 99.99th=[ 426] 00:19:07.932 bw ( KiB/s): min=67449, max=109274, per=4.88%, avg=73972.75, stdev=9020.58, samples=20 00:19:07.932 iops : min= 263, max= 426, avg=288.85, stdev=35.07, samples=20 00:19:07.932 lat (msec) : 20=0.14%, 50=0.54%, 100=1.62%, 250=95.84%, 500=1.86% 00:19:07.932 cpu : usr=0.87%, sys=0.78%, ctx=3429, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,2954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 job10: (groupid=0, jobs=1): err= 0: pid=91338: Sun Dec 15 13:33:12 2024 00:19:07.932 write: IOPS=508, BW=127MiB/s (133MB/s)(1283MiB/10096msec); 0 zone resets 00:19:07.932 slat (usec): min=20, max=38786, avg=1943.15, stdev=3340.35 00:19:07.932 clat (msec): min=42, max=214, avg=123.89, stdev=10.37 00:19:07.932 lat (msec): min=42, max=214, avg=125.84, stdev= 9.99 00:19:07.932 clat percentiles (msec): 00:19:07.932 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 117], 20.00th=[ 118], 00:19:07.932 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 125], 60.00th=[ 125], 00:19:07.932 | 70.00th=[ 126], 80.00th=[ 126], 90.00th=[ 127], 95.00th=[ 128], 00:19:07.932 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 207], 99.95th=[ 207], 00:19:07.932 | 99.99th=[ 215] 00:19:07.932 bw ( KiB/s): min=92672, max=134656, per=8.56%, avg=129765.70, stdev=8814.01, samples=20 00:19:07.932 iops : min= 362, max= 526, avg=506.75, stdev=34.41, samples=20 00:19:07.932 lat (msec) : 50=0.02%, 100=0.56%, 250=99.42% 00:19:07.932 cpu : usr=1.28%, sys=1.44%, ctx=6290, majf=0, minf=1 00:19:07.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:07.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.932 issued rwts: total=0,5133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.932 00:19:07.932 Run status group 0 (all jobs): 00:19:07.932 WRITE: bw=1480MiB/s (1552MB/s), 63.7MiB/s-356MiB/s (66.8MB/s-373MB/s), io=14.8GiB (15.8GB), run=10039-10207msec 00:19:07.932 00:19:07.932 Disk stats (read/write): 00:19:07.932 nvme0n1: ios=49/5426, merge=0/0, ticks=48/1203711, in_queue=1203759, util=97.73% 00:19:07.932 nvme10n1: ios=49/28285, merge=0/0, ticks=47/1216842, in_queue=1216889, util=97.94% 00:19:07.932 nvme1n1: ios=31/10392, merge=0/0, ticks=26/1212782, in_queue=1212808, util=97.92% 00:19:07.932 nvme2n1: ios=13/5252, merge=0/0, ticks=21/1204830, in_queue=1204851, util=98.09% 00:19:07.932 nvme3n1: ios=26/5087, merge=0/0, ticks=14/1202477, in_queue=1202491, util=98.05% 00:19:07.932 nvme4n1: ios=0/5335, merge=0/0, ticks=0/1204514, in_queue=1204514, util=98.26% 00:19:07.932 nvme5n1: ios=0/28386, merge=0/0, ticks=0/1217610, in_queue=1217610, util=98.24% 00:19:07.932 nvme6n1: ios=0/10155, merge=0/0, ticks=0/1214463, in_queue=1214463, util=98.55% 00:19:07.932 nvme7n1: ios=0/5062, merge=0/0, ticks=0/1203667, in_queue=1203667, util=98.66% 00:19:07.932 nvme8n1: ios=0/5771, merge=0/0, ticks=0/1206517, in_queue=1206517, util=98.80% 00:19:07.932 nvme9n1: ios=0/10114, merge=0/0, ticks=0/1212577, in_queue=1212577, util=98.87% 00:19:07.932 13:33:12 -- target/multiconnection.sh@36 -- # sync 00:19:07.932 13:33:12 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:07.932 13:33:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.932 13:33:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.932 13:33:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:07.932 13:33:12 -- common/autotest_common.sh@1208 -- # local i=0 
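With the write workload finished and the per-disk stats printed, the script tears down each of the 11 subsystems in turn: disconnect the initiator from the NQN, wait until the matching SPDKn serial no longer appears in lsblk, then delete the subsystem over RPC. A condensed sketch of that loop, assuming the loop bound of 11 comes from NVMF_SUBSYS as implied by the 'seq 1 11' in the trace (rpc_cmd is the test framework's JSON-RPC wrapper):

    # hypothetical condensed form of the per-subsystem teardown visible in the trace
    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # wait until no block device reports serial SPDK${i} any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
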
00:19:07.932 13:33:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.932 13:33:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:07.933 13:33:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.933 13:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:07.933 13:33:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:07.933 13:33:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:07.933 13:33:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:07.933 13:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:07.933 13:33:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:07.933 13:33:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:07.933 13:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:07.933 13:33:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:07.933 13:33:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 
13:33:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:07.933 13:33:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:07.933 13:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:07.933 13:33:13 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.933 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.933 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:07.933 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:07.933 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:07.933 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.933 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:07.933 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.933 13:33:13 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:07.933 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.933 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:07.933 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.933 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:07.933 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.934 13:33:13 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.934 13:33:13 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:08.192 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:08.192 13:33:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:08.192 13:33:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:08.192 13:33:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:08.192 13:33:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:08.192 13:33:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:08.192 13:33:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:08.192 13:33:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:08.192 13:33:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:08.192 13:33:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.192 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:19:08.192 13:33:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.192 13:33:13 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:08.192 13:33:13 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:08.192 13:33:13 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:08.192 13:33:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:08.192 13:33:13 -- nvmf/common.sh@116 -- # sync 00:19:08.192 13:33:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:08.192 13:33:13 -- nvmf/common.sh@119 -- # set +e 00:19:08.192 13:33:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:08.192 13:33:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:08.192 rmmod nvme_tcp 00:19:08.192 rmmod nvme_fabrics 00:19:08.192 rmmod nvme_keyring 00:19:08.192 13:33:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:08.192 13:33:13 -- nvmf/common.sh@123 -- # set -e 00:19:08.192 13:33:13 -- nvmf/common.sh@124 -- # return 0 00:19:08.192 13:33:13 -- nvmf/common.sh@477 -- # '[' -n 90633 ']' 00:19:08.193 13:33:13 -- nvmf/common.sh@478 -- # killprocess 90633 00:19:08.193 13:33:13 -- common/autotest_common.sh@936 -- # '[' -z 90633 ']' 00:19:08.193 13:33:13 -- common/autotest_common.sh@940 -- # kill -0 90633 00:19:08.193 13:33:13 -- common/autotest_common.sh@941 -- # uname 00:19:08.193 13:33:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.193 13:33:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90633 00:19:08.193 killing process with pid 90633 00:19:08.193 13:33:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:08.193 13:33:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:08.193 13:33:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90633' 00:19:08.193 13:33:13 -- common/autotest_common.sh@955 -- # kill 90633 00:19:08.193 13:33:13 -- common/autotest_common.sh@960 -- # wait 90633 00:19:08.760 13:33:14 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:08.760 13:33:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:08.760 13:33:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:08.760 13:33:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.760 13:33:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:08.760 13:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.760 13:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.760 13:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.760 13:33:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:08.760 00:19:08.760 real 0m49.882s 00:19:08.760 user 2m45.884s 00:19:08.760 sys 0m27.233s 00:19:08.760 13:33:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:08.760 13:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:08.760 ************************************ 00:19:08.760 END TEST nvmf_multiconnection 00:19:08.760 ************************************ 00:19:08.760 13:33:14 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:08.760 13:33:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.760 13:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.760 13:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:08.760 ************************************ 00:19:08.760 START TEST nvmf_initiator_timeout 00:19:08.760 ************************************ 00:19:08.760 13:33:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:09.019 * Looking for test storage... 00:19:09.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:09.019 13:33:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:09.019 13:33:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:09.019 13:33:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:09.019 13:33:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:09.019 13:33:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:09.019 13:33:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:09.019 13:33:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:09.019 13:33:14 -- scripts/common.sh@335 -- # IFS=.-: 00:19:09.019 13:33:14 -- scripts/common.sh@335 -- # read -ra ver1 00:19:09.019 13:33:14 -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.019 13:33:14 -- scripts/common.sh@336 -- # read -ra ver2 00:19:09.019 13:33:14 -- scripts/common.sh@337 -- # local 'op=<' 00:19:09.019 13:33:14 -- scripts/common.sh@339 -- # ver1_l=2 00:19:09.019 13:33:14 -- scripts/common.sh@340 -- # ver2_l=1 00:19:09.019 13:33:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:09.019 13:33:14 -- scripts/common.sh@343 -- # case "$op" in 00:19:09.019 13:33:14 -- scripts/common.sh@344 -- # : 1 00:19:09.019 13:33:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:09.019 13:33:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.019 13:33:14 -- scripts/common.sh@364 -- # decimal 1 00:19:09.019 13:33:14 -- scripts/common.sh@352 -- # local d=1 00:19:09.019 13:33:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.019 13:33:14 -- scripts/common.sh@354 -- # echo 1 00:19:09.019 13:33:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:09.019 13:33:14 -- scripts/common.sh@365 -- # decimal 2 00:19:09.019 13:33:14 -- scripts/common.sh@352 -- # local d=2 00:19:09.019 13:33:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.019 13:33:14 -- scripts/common.sh@354 -- # echo 2 00:19:09.019 13:33:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:09.019 13:33:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:09.019 13:33:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:09.019 13:33:14 -- scripts/common.sh@367 -- # return 0 00:19:09.019 13:33:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.019 13:33:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:09.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.019 --rc genhtml_branch_coverage=1 00:19:09.019 --rc genhtml_function_coverage=1 00:19:09.019 --rc genhtml_legend=1 00:19:09.019 --rc geninfo_all_blocks=1 00:19:09.019 --rc geninfo_unexecuted_blocks=1 00:19:09.019 00:19:09.019 ' 00:19:09.019 13:33:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:09.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.019 --rc genhtml_branch_coverage=1 00:19:09.019 --rc genhtml_function_coverage=1 00:19:09.019 --rc genhtml_legend=1 00:19:09.019 --rc geninfo_all_blocks=1 00:19:09.019 --rc geninfo_unexecuted_blocks=1 00:19:09.019 00:19:09.019 ' 00:19:09.019 13:33:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:09.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.019 --rc genhtml_branch_coverage=1 00:19:09.019 --rc genhtml_function_coverage=1 00:19:09.019 --rc genhtml_legend=1 00:19:09.019 --rc geninfo_all_blocks=1 00:19:09.019 --rc geninfo_unexecuted_blocks=1 00:19:09.019 00:19:09.019 ' 00:19:09.019 13:33:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:09.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.019 --rc genhtml_branch_coverage=1 00:19:09.019 --rc genhtml_function_coverage=1 00:19:09.019 --rc genhtml_legend=1 00:19:09.019 --rc geninfo_all_blocks=1 00:19:09.019 --rc geninfo_unexecuted_blocks=1 00:19:09.019 00:19:09.019 ' 00:19:09.019 13:33:14 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.019 13:33:14 -- nvmf/common.sh@7 -- # uname -s 00:19:09.019 13:33:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.019 13:33:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.019 13:33:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.019 13:33:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.019 13:33:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.019 13:33:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.019 13:33:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.019 13:33:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.019 13:33:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.019 13:33:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.019 13:33:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
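The host NQN generated here is reused when the initiator connects later in this test: the UUID suffix doubles as the host ID, and both are passed to nvme connect against the target listener. A minimal sketch of that pairing, assuming the 10.0.0.2:4420 listener configured further down in the trace (the parameter expansion used to strip the prefix is an illustration, not the exact line from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:245f2070-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # host ID reuses the UUID portion of the NQN
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
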
00:19:09.019 13:33:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:19:09.019 13:33:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.019 13:33:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.019 13:33:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.019 13:33:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.019 13:33:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.019 13:33:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.019 13:33:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.019 13:33:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.019 13:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.019 13:33:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.019 13:33:14 -- paths/export.sh@5 -- # export PATH 00:19:09.019 13:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.019 13:33:14 -- nvmf/common.sh@46 -- # : 0 00:19:09.019 13:33:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.019 13:33:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.019 13:33:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.019 13:33:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.019 13:33:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.019 13:33:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:09.019 13:33:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.019 13:33:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.019 13:33:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.020 13:33:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.020 13:33:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:09.020 13:33:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.020 13:33:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.020 13:33:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.020 13:33:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.020 13:33:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.020 13:33:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.020 13:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.020 13:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.020 13:33:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:09.020 13:33:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:09.020 13:33:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:09.020 13:33:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:09.020 13:33:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:09.020 13:33:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:09.020 13:33:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.020 13:33:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.020 13:33:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.020 13:33:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:09.020 13:33:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.020 13:33:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.020 13:33:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.020 13:33:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.020 13:33:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.020 13:33:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.020 13:33:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.020 13:33:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.020 13:33:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:09.020 13:33:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:09.020 Cannot find device "nvmf_tgt_br" 00:19:09.020 13:33:14 -- nvmf/common.sh@154 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.020 Cannot find device "nvmf_tgt_br2" 00:19:09.020 13:33:14 -- nvmf/common.sh@155 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:09.020 13:33:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:09.020 Cannot find device "nvmf_tgt_br" 00:19:09.020 13:33:14 -- nvmf/common.sh@157 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:09.020 Cannot find device "nvmf_tgt_br2" 00:19:09.020 13:33:14 -- nvmf/common.sh@158 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:09.020 13:33:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:09.020 13:33:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:09.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.020 13:33:14 -- nvmf/common.sh@161 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.020 13:33:14 -- nvmf/common.sh@162 -- # true 00:19:09.020 13:33:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.020 13:33:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.278 13:33:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.278 13:33:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.278 13:33:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.278 13:33:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.278 13:33:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.278 13:33:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.278 13:33:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:09.278 13:33:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:09.278 13:33:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:09.278 13:33:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:09.278 13:33:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:09.278 13:33:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.278 13:33:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.278 13:33:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.278 13:33:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:09.278 13:33:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:09.278 13:33:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.278 13:33:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.278 13:33:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.278 13:33:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.278 13:33:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.278 13:33:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:09.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:09.278 00:19:09.278 --- 10.0.0.2 ping statistics --- 00:19:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.278 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:09.278 13:33:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:09.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:19:09.278 00:19:09.278 --- 10.0.0.3 ping statistics --- 00:19:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.278 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:09.278 13:33:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:09.278 00:19:09.278 --- 10.0.0.1 ping statistics --- 00:19:09.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.279 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:09.279 13:33:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.279 13:33:14 -- nvmf/common.sh@421 -- # return 0 00:19:09.279 13:33:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:09.279 13:33:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.279 13:33:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:09.279 13:33:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:09.279 13:33:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.279 13:33:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:09.279 13:33:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:09.279 13:33:14 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:09.279 13:33:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.279 13:33:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.279 13:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:09.279 13:33:14 -- nvmf/common.sh@469 -- # nvmfpid=91714 00:19:09.279 13:33:14 -- nvmf/common.sh@470 -- # waitforlisten 91714 00:19:09.279 13:33:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:09.279 13:33:14 -- common/autotest_common.sh@829 -- # '[' -z 91714 ']' 00:19:09.279 13:33:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.279 13:33:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.279 13:33:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.279 13:33:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.279 13:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:09.551 [2024-12-15 13:33:14.976398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:09.551 [2024-12-15 13:33:14.976477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.551 [2024-12-15 13:33:15.104676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.551 [2024-12-15 13:33:15.169534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.551 [2024-12-15 13:33:15.169698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.551 [2024-12-15 13:33:15.169711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.551 [2024-12-15 13:33:15.169719] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
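The "Cannot find device" and "Cannot open network namespace" messages above come from the cleanup half of nvmf_veth_init removing leftovers from a previous run; the setup that follows them rebuilds the topology from scratch (one network namespace for the target, three veth pairs, and a bridge joining the host-side ends) and verifies it with the three pings. A condensed sketch of that recipe, with interface names and addresses taken from the trace above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # primary target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # secondary target address
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br        # host-side veth ends joined on the bridge
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                          # initiator -> target reachability check
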
00:19:09.551 [2024-12-15 13:33:15.169870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.552 [2024-12-15 13:33:15.170005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.552 [2024-12-15 13:33:15.170624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.552 [2024-12-15 13:33:15.170671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.487 13:33:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.487 13:33:15 -- common/autotest_common.sh@862 -- # return 0 00:19:10.487 13:33:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:10.487 13:33:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.487 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:10.487 13:33:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.487 13:33:15 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:10.487 13:33:15 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:10.487 13:33:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.487 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:19:10.487 Malloc0 00:19:10.487 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.487 13:33:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:10.487 13:33:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.487 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.487 Delay0 00:19:10.487 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.487 13:33:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.487 13:33:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.488 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.488 [2024-12-15 13:33:16.023106] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.488 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.488 13:33:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:10.488 13:33:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.488 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.488 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.488 13:33:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.488 13:33:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.488 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.488 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.488 13:33:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.488 13:33:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.488 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:10.488 [2024-12-15 13:33:16.051271] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.488 13:33:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.488 13:33:16 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:10.746 13:33:16 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:10.746 13:33:16 -- common/autotest_common.sh@1187 -- # local i=0 00:19:10.746 13:33:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.746 13:33:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:10.746 13:33:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:12.676 13:33:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:12.676 13:33:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:12.676 13:33:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:12.676 13:33:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:12.676 13:33:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.676 13:33:18 -- common/autotest_common.sh@1197 -- # return 0 00:19:12.676 13:33:18 -- target/initiator_timeout.sh@35 -- # fio_pid=91798 00:19:12.676 13:33:18 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:12.676 13:33:18 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:12.676 [global] 00:19:12.676 thread=1 00:19:12.676 invalidate=1 00:19:12.676 rw=write 00:19:12.676 time_based=1 00:19:12.676 runtime=60 00:19:12.676 ioengine=libaio 00:19:12.676 direct=1 00:19:12.676 bs=4096 00:19:12.676 iodepth=1 00:19:12.676 norandommap=0 00:19:12.676 numjobs=1 00:19:12.676 00:19:12.676 verify_dump=1 00:19:12.676 verify_backlog=512 00:19:12.676 verify_state_save=0 00:19:12.676 do_verify=1 00:19:12.676 verify=crc32c-intel 00:19:12.676 [job0] 00:19:12.676 filename=/dev/nvme0n1 00:19:12.676 Could not set queue depth (nvme0n1) 00:19:12.935 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.935 fio-3.35 00:19:12.935 Starting 1 thread 00:19:16.221 13:33:21 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:16.221 13:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.221 13:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.221 true 00:19:16.221 13:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.221 13:33:21 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:16.221 13:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.221 13:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.221 true 00:19:16.221 13:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.221 13:33:21 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:16.221 13:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.221 13:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.221 true 00:19:16.221 13:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.221 13:33:21 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:16.221 13:33:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.221 13:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:16.221 true 00:19:16.221 13:33:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.221 13:33:21 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:18.754 13:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.754 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.754 true 00:19:18.754 13:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:18.754 13:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.754 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.754 true 00:19:18.754 13:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:18.754 13:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.754 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.754 true 00:19:18.754 13:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:18.754 13:33:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.754 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:19:18.754 true 00:19:18.754 13:33:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:18.754 13:33:24 -- target/initiator_timeout.sh@54 -- # wait 91798 00:20:14.998 00:20:14.998 job0: (groupid=0, jobs=1): err= 0: pid=91819: Sun Dec 15 13:34:18 2024 00:20:14.998 read: IOPS=750, BW=3004KiB/s (3076kB/s)(176MiB/60001msec) 00:20:14.998 slat (usec): min=12, max=16247, avg=16.89, stdev=94.76 00:20:14.998 clat (usec): min=4, max=40718k, avg=1118.03, stdev=191823.83 00:20:14.998 lat (usec): min=168, max=40718k, avg=1134.92, stdev=191823.85 00:20:14.998 clat percentiles (usec): 00:20:14.998 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 190], 00:20:14.998 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:20:14.998 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 265], 00:20:14.998 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 379], 99.95th=[ 578], 00:20:14.998 | 99.99th=[ 1549] 00:20:14.998 write: IOPS=756, BW=3025KiB/s (3097kB/s)(177MiB/60001msec); 0 zone resets 00:20:14.998 slat (usec): min=18, max=1093, avg=23.44, stdev= 9.13 00:20:14.998 clat (usec): min=92, max=7411, avg=168.47, stdev=47.06 00:20:14.998 lat (usec): min=141, max=7431, avg=191.91, stdev=48.11 00:20:14.998 clat percentiles (usec): 00:20:14.998 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:20:14.998 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:20:14.998 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 212], 00:20:14.998 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 343], 00:20:14.998 | 99.99th=[ 1385] 00:20:14.998 bw ( KiB/s): min= 7608, max=12288, per=100.00%, avg=9331.16, stdev=1114.20, samples=38 00:20:14.998 iops : min= 1902, max= 3072, avg=2332.79, stdev=278.55, samples=38 00:20:14.998 lat (usec) : 10=0.01%, 100=0.01%, 250=94.85%, 500=5.10%, 750=0.02% 00:20:14.998 lat (usec) : 1000=0.01% 00:20:14.998 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:14.998 cpu : usr=0.59%, sys=2.21%, ctx=90468, majf=0, minf=5 00:20:14.998 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.998 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.998 issued rwts: total=45056,45373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.998 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.998 00:20:14.998 Run status group 0 (all jobs): 00:20:14.998 READ: bw=3004KiB/s (3076kB/s), 3004KiB/s-3004KiB/s (3076kB/s-3076kB/s), io=176MiB (185MB), run=60001-60001msec 00:20:14.998 WRITE: bw=3025KiB/s (3097kB/s), 3025KiB/s-3025KiB/s (3097kB/s-3097kB/s), io=177MiB (186MB), run=60001-60001msec 00:20:14.998 00:20:14.998 Disk stats (read/write): 00:20:14.998 nvme0n1: ios=45128/45056, merge=0/0, ticks=10263/8389, in_queue=18652, util=99.77% 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:14.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:14.998 13:34:18 -- common/autotest_common.sh@1208 -- # local i=0 00:20:14.998 13:34:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:14.998 13:34:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:14.998 13:34:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:14.998 13:34:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:14.998 nvmf hotplug test: fio successful as expected 00:20:14.998 13:34:18 -- common/autotest_common.sh@1220 -- # return 0 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.998 13:34:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.998 13:34:18 -- common/autotest_common.sh@10 -- # set +x 00:20:14.998 13:34:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:14.998 13:34:18 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:14.998 13:34:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.998 13:34:18 -- nvmf/common.sh@116 -- # sync 00:20:14.998 13:34:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.998 13:34:18 -- nvmf/common.sh@119 -- # set +e 00:20:14.998 13:34:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.998 13:34:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.998 rmmod nvme_tcp 00:20:14.998 rmmod nvme_fabrics 00:20:14.998 rmmod nvme_keyring 00:20:14.998 13:34:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:14.998 13:34:18 -- nvmf/common.sh@123 -- # set -e 00:20:14.998 13:34:18 -- nvmf/common.sh@124 -- # return 0 00:20:14.998 13:34:18 -- nvmf/common.sh@477 -- # '[' -n 91714 ']' 00:20:14.998 13:34:18 -- nvmf/common.sh@478 -- # killprocess 91714 00:20:14.998 13:34:18 -- common/autotest_common.sh@936 -- # '[' -z 91714 ']' 00:20:14.998 13:34:18 -- common/autotest_common.sh@940 -- # kill -0 91714 00:20:14.999 13:34:18 -- common/autotest_common.sh@941 -- # uname 00:20:14.999 13:34:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.999 13:34:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
91714 00:20:14.999 killing process with pid 91714 00:20:14.999 13:34:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:14.999 13:34:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.999 13:34:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91714' 00:20:14.999 13:34:18 -- common/autotest_common.sh@955 -- # kill 91714 00:20:14.999 13:34:18 -- common/autotest_common.sh@960 -- # wait 91714 00:20:14.999 13:34:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:14.999 13:34:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:14.999 13:34:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:14.999 13:34:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.999 13:34:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:14.999 13:34:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.999 13:34:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.999 13:34:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.999 13:34:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:14.999 ************************************ 00:20:14.999 END TEST nvmf_initiator_timeout 00:20:14.999 ************************************ 00:20:14.999 00:20:14.999 real 1m4.766s 00:20:14.999 user 4m6.306s 00:20:14.999 sys 0m9.078s 00:20:14.999 13:34:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:14.999 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:14.999 13:34:19 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:14.999 13:34:19 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:14.999 13:34:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.999 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:14.999 13:34:19 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:14.999 13:34:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.999 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:14.999 13:34:19 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:14.999 13:34:19 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:14.999 13:34:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:14.999 13:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.999 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:14.999 ************************************ 00:20:14.999 START TEST nvmf_multicontroller 00:20:14.999 ************************************ 00:20:14.999 13:34:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:14.999 * Looking for test storage... 
00:20:14.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.999 13:34:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:14.999 13:34:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:14.999 13:34:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:14.999 13:34:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:14.999 13:34:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:14.999 13:34:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:14.999 13:34:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:14.999 13:34:19 -- scripts/common.sh@335 -- # IFS=.-: 00:20:14.999 13:34:19 -- scripts/common.sh@335 -- # read -ra ver1 00:20:14.999 13:34:19 -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.999 13:34:19 -- scripts/common.sh@336 -- # read -ra ver2 00:20:14.999 13:34:19 -- scripts/common.sh@337 -- # local 'op=<' 00:20:14.999 13:34:19 -- scripts/common.sh@339 -- # ver1_l=2 00:20:14.999 13:34:19 -- scripts/common.sh@340 -- # ver2_l=1 00:20:14.999 13:34:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:14.999 13:34:19 -- scripts/common.sh@343 -- # case "$op" in 00:20:14.999 13:34:19 -- scripts/common.sh@344 -- # : 1 00:20:14.999 13:34:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:14.999 13:34:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.999 13:34:19 -- scripts/common.sh@364 -- # decimal 1 00:20:14.999 13:34:19 -- scripts/common.sh@352 -- # local d=1 00:20:14.999 13:34:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.999 13:34:19 -- scripts/common.sh@354 -- # echo 1 00:20:14.999 13:34:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:14.999 13:34:19 -- scripts/common.sh@365 -- # decimal 2 00:20:14.999 13:34:19 -- scripts/common.sh@352 -- # local d=2 00:20:14.999 13:34:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.999 13:34:19 -- scripts/common.sh@354 -- # echo 2 00:20:14.999 13:34:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:14.999 13:34:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:14.999 13:34:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:14.999 13:34:19 -- scripts/common.sh@367 -- # return 0 00:20:14.999 13:34:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.999 13:34:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.999 --rc genhtml_branch_coverage=1 00:20:14.999 --rc genhtml_function_coverage=1 00:20:14.999 --rc genhtml_legend=1 00:20:14.999 --rc geninfo_all_blocks=1 00:20:14.999 --rc geninfo_unexecuted_blocks=1 00:20:14.999 00:20:14.999 ' 00:20:14.999 13:34:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.999 --rc genhtml_branch_coverage=1 00:20:14.999 --rc genhtml_function_coverage=1 00:20:14.999 --rc genhtml_legend=1 00:20:14.999 --rc geninfo_all_blocks=1 00:20:14.999 --rc geninfo_unexecuted_blocks=1 00:20:14.999 00:20:14.999 ' 00:20:14.999 13:34:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.999 --rc genhtml_branch_coverage=1 00:20:14.999 --rc genhtml_function_coverage=1 00:20:14.999 --rc genhtml_legend=1 00:20:14.999 --rc geninfo_all_blocks=1 00:20:14.999 --rc geninfo_unexecuted_blocks=1 00:20:14.999 00:20:14.999 ' 00:20:14.999 
13:34:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.999 --rc genhtml_branch_coverage=1 00:20:14.999 --rc genhtml_function_coverage=1 00:20:14.999 --rc genhtml_legend=1 00:20:14.999 --rc geninfo_all_blocks=1 00:20:14.999 --rc geninfo_unexecuted_blocks=1 00:20:14.999 00:20:14.999 ' 00:20:14.999 13:34:19 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.999 13:34:19 -- nvmf/common.sh@7 -- # uname -s 00:20:14.999 13:34:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.999 13:34:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.999 13:34:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.999 13:34:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.999 13:34:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.999 13:34:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.999 13:34:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.999 13:34:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.999 13:34:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.999 13:34:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.999 13:34:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:14.999 13:34:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:14.999 13:34:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.999 13:34:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.999 13:34:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:14.999 13:34:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.999 13:34:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.999 13:34:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.999 13:34:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.999 13:34:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.999 13:34:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.999 13:34:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.999 13:34:19 -- paths/export.sh@5 -- # export PATH 00:20:15.000 13:34:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.000 13:34:19 -- nvmf/common.sh@46 -- # : 0 00:20:15.000 13:34:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:15.000 13:34:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:15.000 13:34:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.000 13:34:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.000 13:34:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:15.000 13:34:19 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.000 13:34:19 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.000 13:34:19 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:15.000 13:34:19 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:15.000 13:34:19 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.000 13:34:19 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:15.000 13:34:19 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:15.000 13:34:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.000 13:34:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:15.000 13:34:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:15.000 13:34:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:15.000 13:34:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.000 13:34:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.000 13:34:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.000 13:34:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:15.000 13:34:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.000 13:34:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:15.000 13:34:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.000 13:34:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:15.000 13:34:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.000 13:34:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.000 13:34:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.000 13:34:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.000 13:34:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.000 13:34:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.000 13:34:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.000 13:34:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.000 13:34:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:15.000 13:34:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:15.000 Cannot find device "nvmf_tgt_br" 00:20:15.000 13:34:19 -- nvmf/common.sh@154 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.000 Cannot find device "nvmf_tgt_br2" 00:20:15.000 13:34:19 -- nvmf/common.sh@155 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:15.000 13:34:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:15.000 Cannot find device "nvmf_tgt_br" 00:20:15.000 13:34:19 -- nvmf/common.sh@157 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:15.000 Cannot find device "nvmf_tgt_br2" 00:20:15.000 13:34:19 -- nvmf/common.sh@158 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:15.000 13:34:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:15.000 13:34:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.000 13:34:19 -- nvmf/common.sh@161 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.000 13:34:19 -- nvmf/common.sh@162 -- # true 00:20:15.000 13:34:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.000 13:34:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.000 13:34:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.000 13:34:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.000 13:34:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.000 13:34:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.000 13:34:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.000 13:34:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.000 13:34:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.000 13:34:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:15.000 13:34:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:15.000 13:34:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:15.000 13:34:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:15.000 13:34:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.000 13:34:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.000 13:34:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.000 13:34:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:15.000 13:34:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:15.000 13:34:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.000 13:34:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.000 13:34:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.000 13:34:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.000 13:34:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.000 13:34:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:15.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:15.000 00:20:15.000 --- 10.0.0.2 ping statistics --- 00:20:15.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.000 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:15.000 13:34:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:15.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:15.000 00:20:15.000 --- 10.0.0.3 ping statistics --- 00:20:15.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.000 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:15.000 13:34:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:15.000 00:20:15.000 --- 10.0.0.1 ping statistics --- 00:20:15.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.000 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:15.000 13:34:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.000 13:34:19 -- nvmf/common.sh@421 -- # return 0 00:20:15.000 13:34:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.000 13:34:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:15.000 13:34:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.000 13:34:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:15.000 13:34:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:15.000 13:34:19 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:15.000 13:34:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:15.000 13:34:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:15.000 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:15.000 13:34:19 -- nvmf/common.sh@469 -- # nvmfpid=92658 00:20:15.000 13:34:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:15.000 13:34:19 -- nvmf/common.sh@470 -- # waitforlisten 92658 00:20:15.000 13:34:19 -- common/autotest_common.sh@829 -- # '[' -z 92658 ']' 00:20:15.000 13:34:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.000 13:34:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.000 13:34:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.000 13:34:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.000 13:34:19 -- common/autotest_common.sh@10 -- # set +x 00:20:15.000 [2024-12-15 13:34:19.863095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:15.000 [2024-12-15 13:34:19.863176] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.000 [2024-12-15 13:34:19.988978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:15.001 [2024-12-15 13:34:20.071505] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:15.001 [2024-12-15 13:34:20.071933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.001 [2024-12-15 13:34:20.072101] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.001 [2024-12-15 13:34:20.072307] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.001 [2024-12-15 13:34:20.072577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.001 [2024-12-15 13:34:20.072664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.001 [2024-12-15 13:34:20.072667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.259 13:34:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.259 13:34:20 -- common/autotest_common.sh@862 -- # return 0 00:20:15.259 13:34:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:15.259 13:34:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.259 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:15.259 13:34:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.259 13:34:20 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:15.259 13:34:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.259 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 [2024-12-15 13:34:20.955785] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.518 13:34:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:20 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:15.518 13:34:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 Malloc0 00:20:15.518 13:34:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:20 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.518 13:34:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:15.518 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.518 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 [2024-12-15 13:34:21.018841] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.518 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:15.518 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 [2024-12-15 13:34:21.026731] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:15.518 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.518 13:34:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:15.518 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.518 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.518 Malloc1 00:20:15.519 13:34:21 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.519 13:34:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:15.519 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.519 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.519 13:34:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:15.519 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.519 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.519 13:34:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:15.519 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.519 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.519 13:34:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:15.519 13:34:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.519 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:15.519 13:34:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.519 13:34:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=92716 00:20:15.519 13:34:21 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:15.519 13:34:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.519 13:34:21 -- host/multicontroller.sh@47 -- # waitforlisten 92716 /var/tmp/bdevperf.sock 00:20:15.519 13:34:21 -- common/autotest_common.sh@829 -- # '[' -z 92716 ']' 00:20:15.519 13:34:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.519 13:34:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:15.519 13:34:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:15.519 13:34:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:15.519 13:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 13:34:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:16.897 13:34:22 -- common/autotest_common.sh@862 -- # return 0 00:20:16.897 13:34:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:16.897 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.897 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 NVMe0n1 00:20:16.897 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.897 13:34:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:16.897 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.897 13:34:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:16.897 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.897 1 00:20:16.897 13:34:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.897 13:34:22 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.897 13:34:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.897 13:34:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.897 13:34:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.897 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.897 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 2024/12/15 13:34:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.897 request: 00:20:16.897 { 00:20:16.897 "method": "bdev_nvme_attach_controller", 00:20:16.897 "params": { 00:20:16.897 "name": "NVMe0", 00:20:16.897 "trtype": "tcp", 00:20:16.897 "traddr": "10.0.0.2", 00:20:16.897 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:16.897 "hostaddr": "10.0.0.2", 00:20:16.897 "hostsvcid": "60000", 00:20:16.897 "adrfam": "ipv4", 00:20:16.897 "trsvcid": "4420", 00:20:16.897 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:16.897 } 00:20:16.897 } 00:20:16.897 Got JSON-RPC error response 00:20:16.897 GoRPCClient: error on JSON-RPC call 00:20:16.897 13:34:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.897 13:34:22 -- 
common/autotest_common.sh@653 -- # es=1 00:20:16.897 13:34:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.897 13:34:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.897 13:34:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.897 13:34:22 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.897 13:34:22 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.897 13:34:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.897 13:34:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.897 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.897 13:34:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.897 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.897 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 2024/12/15 13:34:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.897 request: 00:20:16.897 { 00:20:16.897 "method": "bdev_nvme_attach_controller", 00:20:16.897 "params": { 00:20:16.897 "name": "NVMe0", 00:20:16.897 "trtype": "tcp", 00:20:16.897 "traddr": "10.0.0.2", 00:20:16.897 "hostaddr": "10.0.0.2", 00:20:16.897 "hostsvcid": "60000", 00:20:16.897 "adrfam": "ipv4", 00:20:16.897 "trsvcid": "4420", 00:20:16.897 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:16.897 } 00:20:16.897 } 00:20:16.897 Got JSON-RPC error response 00:20:16.897 GoRPCClient: error on JSON-RPC call 00:20:16.897 13:34:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.897 13:34:22 -- common/autotest_common.sh@653 -- # es=1 00:20:16.897 13:34:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.898 13:34:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.898 13:34:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.898 13:34:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.898 13:34:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.898 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.898 13:34:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.898 13:34:22 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.898 13:34:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 2024/12/15 13:34:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:16.898 request: 00:20:16.898 { 00:20:16.898 "method": "bdev_nvme_attach_controller", 00:20:16.898 "params": { 00:20:16.898 "name": "NVMe0", 00:20:16.898 "trtype": "tcp", 00:20:16.898 "traddr": "10.0.0.2", 00:20:16.898 "hostaddr": "10.0.0.2", 00:20:16.898 "hostsvcid": "60000", 00:20:16.898 "adrfam": "ipv4", 00:20:16.898 "trsvcid": "4420", 00:20:16.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.898 "multipath": "disable" 00:20:16.898 } 00:20:16.898 } 00:20:16.898 Got JSON-RPC error response 00:20:16.898 GoRPCClient: error on JSON-RPC call 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.898 13:34:22 -- common/autotest_common.sh@653 -- # es=1 00:20:16.898 13:34:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.898 13:34:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.898 13:34:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.898 13:34:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.898 13:34:22 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.898 13:34:22 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.898 13:34:22 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.898 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.898 13:34:22 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.898 13:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.898 13:34:22 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 2024/12/15 13:34:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.898 request: 00:20:16.898 { 00:20:16.898 "method": "bdev_nvme_attach_controller", 00:20:16.898 "params": { 00:20:16.898 "name": "NVMe0", 
00:20:16.898 "trtype": "tcp", 00:20:16.898 "traddr": "10.0.0.2", 00:20:16.898 "hostaddr": "10.0.0.2", 00:20:16.898 "hostsvcid": "60000", 00:20:16.898 "adrfam": "ipv4", 00:20:16.898 "trsvcid": "4420", 00:20:16.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.898 "multipath": "failover" 00:20:16.898 } 00:20:16.898 } 00:20:16.898 Got JSON-RPC error response 00:20:16.898 GoRPCClient: error on JSON-RPC call 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.898 13:34:22 -- common/autotest_common.sh@653 -- # es=1 00:20:16.898 13:34:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.898 13:34:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.898 13:34:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.898 13:34:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.898 13:34:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.898 13:34:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.898 13:34:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:16.898 13:34:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:16.898 13:34:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.898 13:34:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.898 13:34:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.898 13:34:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:16.898 13:34:22 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.275 0 00:20:18.275 13:34:23 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:18.275 13:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.275 13:34:23 -- common/autotest_common.sh@10 -- # set +x 00:20:18.275 13:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.275 13:34:23 -- host/multicontroller.sh@100 -- # killprocess 92716 00:20:18.275 13:34:23 -- common/autotest_common.sh@936 -- # '[' -z 92716 ']' 00:20:18.275 13:34:23 -- common/autotest_common.sh@940 -- # kill -0 92716 00:20:18.275 13:34:23 -- common/autotest_common.sh@941 -- # uname 00:20:18.275 13:34:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.275 13:34:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92716 00:20:18.275 killing process with pid 92716 00:20:18.275 
13:34:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:18.275 13:34:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:18.275 13:34:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92716' 00:20:18.275 13:34:23 -- common/autotest_common.sh@955 -- # kill 92716 00:20:18.275 13:34:23 -- common/autotest_common.sh@960 -- # wait 92716 00:20:18.275 13:34:23 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.275 13:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.275 13:34:23 -- common/autotest_common.sh@10 -- # set +x 00:20:18.275 13:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.275 13:34:23 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:18.275 13:34:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.275 13:34:23 -- common/autotest_common.sh@10 -- # set +x 00:20:18.275 13:34:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.275 13:34:23 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:18.275 13:34:23 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:18.275 13:34:23 -- common/autotest_common.sh@1607 -- # read -r file 00:20:18.275 13:34:23 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:18.275 13:34:23 -- common/autotest_common.sh@1606 -- # sort -u 00:20:18.275 13:34:23 -- common/autotest_common.sh@1608 -- # cat 00:20:18.275 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:18.275 [2024-12-15 13:34:21.147782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:18.275 [2024-12-15 13:34:21.147882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92716 ] 00:20:18.275 [2024-12-15 13:34:21.287633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.275 [2024-12-15 13:34:21.356006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.275 [2024-12-15 13:34:22.457231] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name c43c74fe-b417-459b-a29d-ea30a19be8be already exists 00:20:18.275 [2024-12-15 13:34:22.457277] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:c43c74fe-b417-459b-a29d-ea30a19be8be alias for bdev NVMe1n1 00:20:18.275 [2024-12-15 13:34:22.457314] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:18.275 Running I/O for 1 seconds... 
00:20:18.275 00:20:18.275 Latency(us) 00:20:18.275 [2024-12-15T13:34:23.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.275 [2024-12-15T13:34:23.965Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:18.275 NVMe0n1 : 1.00 22811.00 89.11 0.00 0.00 5598.25 2934.23 10604.92 00:20:18.275 [2024-12-15T13:34:23.965Z] =================================================================================================================== 00:20:18.275 [2024-12-15T13:34:23.965Z] Total : 22811.00 89.11 0.00 0.00 5598.25 2934.23 10604.92 00:20:18.275 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.275 00:20:18.275 Latency(us) 00:20:18.275 [2024-12-15T13:34:23.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.275 [2024-12-15T13:34:23.965Z] =================================================================================================================== 00:20:18.275 [2024-12-15T13:34:23.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.275 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:18.275 13:34:23 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:18.275 13:34:23 -- common/autotest_common.sh@1607 -- # read -r file 00:20:18.275 13:34:23 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:18.275 13:34:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:18.275 13:34:23 -- nvmf/common.sh@116 -- # sync 00:20:18.534 13:34:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:18.534 13:34:23 -- nvmf/common.sh@119 -- # set +e 00:20:18.534 13:34:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:18.534 13:34:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:18.534 rmmod nvme_tcp 00:20:18.534 rmmod nvme_fabrics 00:20:18.534 rmmod nvme_keyring 00:20:18.534 13:34:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:18.534 13:34:24 -- nvmf/common.sh@123 -- # set -e 00:20:18.534 13:34:24 -- nvmf/common.sh@124 -- # return 0 00:20:18.534 13:34:24 -- nvmf/common.sh@477 -- # '[' -n 92658 ']' 00:20:18.534 13:34:24 -- nvmf/common.sh@478 -- # killprocess 92658 00:20:18.534 13:34:24 -- common/autotest_common.sh@936 -- # '[' -z 92658 ']' 00:20:18.534 13:34:24 -- common/autotest_common.sh@940 -- # kill -0 92658 00:20:18.534 13:34:24 -- common/autotest_common.sh@941 -- # uname 00:20:18.534 13:34:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:18.534 13:34:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92658 00:20:18.534 13:34:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:18.534 13:34:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:18.534 killing process with pid 92658 00:20:18.534 13:34:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92658' 00:20:18.534 13:34:24 -- common/autotest_common.sh@955 -- # kill 92658 00:20:18.534 13:34:24 -- common/autotest_common.sh@960 -- # wait 92658 00:20:18.792 13:34:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:18.792 13:34:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:18.792 13:34:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:18.793 13:34:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.793 13:34:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:18.793 13:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.793 13:34:24 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:18.793 13:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.793 13:34:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:18.793 ************************************ 00:20:18.793 END TEST nvmf_multicontroller 00:20:18.793 ************************************ 00:20:18.793 00:20:18.793 real 0m5.088s 00:20:18.793 user 0m15.994s 00:20:18.793 sys 0m1.126s 00:20:18.793 13:34:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:18.793 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:20:18.793 13:34:24 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:18.793 13:34:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:18.793 13:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.793 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:20:18.793 ************************************ 00:20:18.793 START TEST nvmf_aer 00:20:18.793 ************************************ 00:20:18.793 13:34:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:18.793 * Looking for test storage... 00:20:18.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.793 13:34:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:18.793 13:34:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:18.793 13:34:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:19.052 13:34:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:19.052 13:34:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:19.052 13:34:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:19.052 13:34:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:19.052 13:34:24 -- scripts/common.sh@335 -- # IFS=.-: 00:20:19.052 13:34:24 -- scripts/common.sh@335 -- # read -ra ver1 00:20:19.052 13:34:24 -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.052 13:34:24 -- scripts/common.sh@336 -- # read -ra ver2 00:20:19.052 13:34:24 -- scripts/common.sh@337 -- # local 'op=<' 00:20:19.052 13:34:24 -- scripts/common.sh@339 -- # ver1_l=2 00:20:19.052 13:34:24 -- scripts/common.sh@340 -- # ver2_l=1 00:20:19.052 13:34:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:19.052 13:34:24 -- scripts/common.sh@343 -- # case "$op" in 00:20:19.052 13:34:24 -- scripts/common.sh@344 -- # : 1 00:20:19.052 13:34:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:19.052 13:34:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.052 13:34:24 -- scripts/common.sh@364 -- # decimal 1 00:20:19.052 13:34:24 -- scripts/common.sh@352 -- # local d=1 00:20:19.052 13:34:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.052 13:34:24 -- scripts/common.sh@354 -- # echo 1 00:20:19.052 13:34:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:19.052 13:34:24 -- scripts/common.sh@365 -- # decimal 2 00:20:19.052 13:34:24 -- scripts/common.sh@352 -- # local d=2 00:20:19.052 13:34:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.052 13:34:24 -- scripts/common.sh@354 -- # echo 2 00:20:19.052 13:34:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:19.052 13:34:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:19.052 13:34:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:19.052 13:34:24 -- scripts/common.sh@367 -- # return 0 00:20:19.052 13:34:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.053 13:34:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:19.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.053 --rc genhtml_branch_coverage=1 00:20:19.053 --rc genhtml_function_coverage=1 00:20:19.053 --rc genhtml_legend=1 00:20:19.053 --rc geninfo_all_blocks=1 00:20:19.053 --rc geninfo_unexecuted_blocks=1 00:20:19.053 00:20:19.053 ' 00:20:19.053 13:34:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:19.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.053 --rc genhtml_branch_coverage=1 00:20:19.053 --rc genhtml_function_coverage=1 00:20:19.053 --rc genhtml_legend=1 00:20:19.053 --rc geninfo_all_blocks=1 00:20:19.053 --rc geninfo_unexecuted_blocks=1 00:20:19.053 00:20:19.053 ' 00:20:19.053 13:34:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:19.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.053 --rc genhtml_branch_coverage=1 00:20:19.053 --rc genhtml_function_coverage=1 00:20:19.053 --rc genhtml_legend=1 00:20:19.053 --rc geninfo_all_blocks=1 00:20:19.053 --rc geninfo_unexecuted_blocks=1 00:20:19.053 00:20:19.053 ' 00:20:19.053 13:34:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:19.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.053 --rc genhtml_branch_coverage=1 00:20:19.053 --rc genhtml_function_coverage=1 00:20:19.053 --rc genhtml_legend=1 00:20:19.053 --rc geninfo_all_blocks=1 00:20:19.053 --rc geninfo_unexecuted_blocks=1 00:20:19.053 00:20:19.053 ' 00:20:19.053 13:34:24 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.053 13:34:24 -- nvmf/common.sh@7 -- # uname -s 00:20:19.053 13:34:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.053 13:34:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.053 13:34:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.053 13:34:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.053 13:34:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.053 13:34:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.053 13:34:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.053 13:34:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.053 13:34:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.053 13:34:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:19.053 
13:34:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:19.053 13:34:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.053 13:34:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.053 13:34:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.053 13:34:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.053 13:34:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.053 13:34:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.053 13:34:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.053 13:34:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.053 13:34:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.053 13:34:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.053 13:34:24 -- paths/export.sh@5 -- # export PATH 00:20:19.053 13:34:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.053 13:34:24 -- nvmf/common.sh@46 -- # : 0 00:20:19.053 13:34:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:19.053 13:34:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:19.053 13:34:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:19.053 13:34:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.053 13:34:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.053 13:34:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:19.053 13:34:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:19.053 13:34:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:19.053 13:34:24 -- host/aer.sh@11 -- # nvmftestinit 00:20:19.053 13:34:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:19.053 13:34:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.053 13:34:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:19.053 13:34:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:19.053 13:34:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:19.053 13:34:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.053 13:34:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.053 13:34:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.053 13:34:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:19.053 13:34:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:19.053 13:34:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.053 13:34:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.053 13:34:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:19.053 13:34:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:19.053 13:34:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.053 13:34:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.053 13:34:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.053 13:34:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.053 13:34:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.053 13:34:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.053 13:34:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.053 13:34:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.053 13:34:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:19.053 13:34:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:19.053 Cannot find device "nvmf_tgt_br" 00:20:19.053 13:34:24 -- nvmf/common.sh@154 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.053 Cannot find device "nvmf_tgt_br2" 00:20:19.053 13:34:24 -- nvmf/common.sh@155 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:19.053 13:34:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:19.053 Cannot find device "nvmf_tgt_br" 00:20:19.053 13:34:24 -- nvmf/common.sh@157 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:19.053 Cannot find device "nvmf_tgt_br2" 00:20:19.053 13:34:24 -- nvmf/common.sh@158 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:19.053 13:34:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:19.053 13:34:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.053 13:34:24 -- nvmf/common.sh@161 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.053 13:34:24 -- nvmf/common.sh@162 -- # true 00:20:19.053 13:34:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.053 13:34:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.053 13:34:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.053 13:34:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.053 13:34:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.053 13:34:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.313 13:34:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.313 13:34:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:19.313 13:34:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:19.313 13:34:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:19.313 13:34:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:19.313 13:34:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:19.313 13:34:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:19.313 13:34:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.313 13:34:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.313 13:34:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.313 13:34:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:19.313 13:34:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:19.313 13:34:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.313 13:34:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.313 13:34:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.313 13:34:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.313 13:34:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.313 13:34:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:19.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:19.313 00:20:19.313 --- 10.0.0.2 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:19.313 13:34:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:19.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:19.313 00:20:19.313 --- 10.0.0.3 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:19.313 13:34:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
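nvmf_veth_init above rebuilds the TCP test topology from scratch: a dedicated network namespace for the target, veth pairs whose root-namespace ends are enslaved to a bridge, addresses 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target), plus two firewall rules. A condensed standalone sketch of the same commands, with interface and address names copied from the trace and the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted for brevity; the ping sweep in the log verifies the result:

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_br ends stay in the root namespace and get bridged,
  # the other ends are the initiator interface and the in-namespace target.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # One Linux bridge ties the root-namespace ends into a single L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP (port 4420) in on the initiator veth and hairpin forwarding
  # across the bridge, matching the two iptables rules in the trace.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT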
00:20:19.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:19.313 00:20:19.313 --- 10.0.0.1 ping statistics --- 00:20:19.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.313 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:19.313 13:34:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.313 13:34:24 -- nvmf/common.sh@421 -- # return 0 00:20:19.313 13:34:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:19.313 13:34:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.313 13:34:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:19.313 13:34:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:19.313 13:34:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.313 13:34:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:19.313 13:34:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:19.313 13:34:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:19.313 13:34:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.313 13:34:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.313 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 13:34:24 -- nvmf/common.sh@469 -- # nvmfpid=92966 00:20:19.313 13:34:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.313 13:34:24 -- nvmf/common.sh@470 -- # waitforlisten 92966 00:20:19.313 13:34:24 -- common/autotest_common.sh@829 -- # '[' -z 92966 ']' 00:20:19.313 13:34:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.313 13:34:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.313 13:34:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.313 13:34:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.313 13:34:24 -- common/autotest_common.sh@10 -- # set +x 00:20:19.313 [2024-12-15 13:34:24.969130] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:19.313 [2024-12-15 13:34:24.969254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.574 [2024-12-15 13:34:25.109619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.574 [2024-12-15 13:34:25.174803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.574 [2024-12-15 13:34:25.174957] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.574 [2024-12-15 13:34:25.174969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.574 [2024-12-15 13:34:25.174977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
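nvmfappstart -m 0xF above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers; only then do the rpc_cmd calls that follow run. A simplified start-and-wait sketch (binary path and flags taken from the trace; the real waitforlisten also probes the socket with an RPC rather than merely checking that it exists):

  sock=/var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for ((i = 0; i < 100; i++)); do
      # The socket only appears once the app has finished DPDK/EAL init.
      [[ -S $sock ]] && kill -0 "$nvmfpid" 2>/dev/null && break
      sleep 0.5
  done
  [[ -S $sock ]] || { echo "nvmf_tgt did not come up" >&2; exit 1; }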
00:20:19.574 [2024-12-15 13:34:25.175561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.575 [2024-12-15 13:34:25.175789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.575 [2024-12-15 13:34:25.176397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.575 [2024-12-15 13:34:25.176454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.511 13:34:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:20.511 13:34:25 -- common/autotest_common.sh@862 -- # return 0 00:20:20.511 13:34:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:20.511 13:34:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:20.511 13:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 13:34:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.511 13:34:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.511 13:34:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 [2024-12-15 13:34:25.957318] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.511 13:34:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:20.511 13:34:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 Malloc0 00:20:20.511 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:26 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:20.511 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:26 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:20.511 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:26 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.511 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 [2024-12-15 13:34:26.023065] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.511 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:26 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:20.511 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.511 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.511 [2024-12-15 13:34:26.030847] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:20.511 [ 00:20:20.511 { 00:20:20.511 "allow_any_host": true, 00:20:20.511 "hosts": [], 00:20:20.511 "listen_addresses": [], 00:20:20.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.511 "subtype": "Discovery" 00:20:20.511 }, 00:20:20.511 { 00:20:20.511 "allow_any_host": true, 00:20:20.511 "hosts": 
[], 00:20:20.511 "listen_addresses": [ 00:20:20.511 { 00:20:20.511 "adrfam": "IPv4", 00:20:20.511 "traddr": "10.0.0.2", 00:20:20.511 "transport": "TCP", 00:20:20.511 "trsvcid": "4420", 00:20:20.511 "trtype": "TCP" 00:20:20.511 } 00:20:20.511 ], 00:20:20.511 "max_cntlid": 65519, 00:20:20.511 "max_namespaces": 2, 00:20:20.511 "min_cntlid": 1, 00:20:20.511 "model_number": "SPDK bdev Controller", 00:20:20.511 "namespaces": [ 00:20:20.511 { 00:20:20.511 "bdev_name": "Malloc0", 00:20:20.511 "name": "Malloc0", 00:20:20.511 "nguid": "F91FE007B5934B0697576530BD97A77E", 00:20:20.511 "nsid": 1, 00:20:20.511 "uuid": "f91fe007-b593-4b06-9757-6530bd97a77e" 00:20:20.511 } 00:20:20.511 ], 00:20:20.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.511 "serial_number": "SPDK00000000000001", 00:20:20.511 "subtype": "NVMe" 00:20:20.511 } 00:20:20.511 ] 00:20:20.511 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.511 13:34:26 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:20.511 13:34:26 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:20.511 13:34:26 -- host/aer.sh@33 -- # aerpid=93020 00:20:20.511 13:34:26 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:20.511 13:34:26 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:20.511 13:34:26 -- common/autotest_common.sh@1254 -- # local i=0 00:20:20.511 13:34:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.511 13:34:26 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:20.511 13:34:26 -- common/autotest_common.sh@1257 -- # i=1 00:20:20.511 13:34:26 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:20.511 13:34:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.511 13:34:26 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:20.511 13:34:26 -- common/autotest_common.sh@1257 -- # i=2 00:20:20.512 13:34:26 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:20.770 13:34:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.770 13:34:26 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:20.770 13:34:26 -- common/autotest_common.sh@1265 -- # return 0 00:20:20.770 13:34:26 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:20.770 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 Malloc1 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.771 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.771 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 [ 00:20:20.771 { 00:20:20.771 "allow_any_host": true, 00:20:20.771 "hosts": [], 00:20:20.771 "listen_addresses": [], 00:20:20.771 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.771 "subtype": "Discovery" 00:20:20.771 }, 00:20:20.771 { 00:20:20.771 "allow_any_host": true, 00:20:20.771 "hosts": [], 00:20:20.771 "listen_addresses": [ 00:20:20.771 { 00:20:20.771 "adrfam": "IPv4", 00:20:20.771 "traddr": "10.0.0.2", 00:20:20.771 "transport": "TCP", 00:20:20.771 "trsvcid": "4420", 00:20:20.771 "trtype": "TCP" 00:20:20.771 } 00:20:20.771 ], 00:20:20.771 "max_cntlid": 65519, 00:20:20.771 "max_namespaces": 2, 00:20:20.771 "min_cntlid": 1, 00:20:20.771 Asynchronous Event Request test 00:20:20.771 Attaching to 10.0.0.2 00:20:20.771 Attached to 10.0.0.2 00:20:20.771 Registering asynchronous event callbacks... 00:20:20.771 Starting namespace attribute notice tests for all controllers... 00:20:20.771 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.771 aer_cb - Changed Namespace 00:20:20.771 Cleaning up... 
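The interleaved output above is host/aer.sh exercising Asynchronous Event Requests: cnode1 starts with Malloc0 as namespace 1, the test launches test/nvme/aer/aer against 10.0.0.2:4420 in the background, waits (the "i -lt 200" / sleep 0.1 loop) for the tool to create /tmp/aer_touch_file once its AER is armed, then hot-adds Malloc1 as namespace 2 so the controller emits a namespace-attribute-changed AEN ("aer_cb - Changed Namespace"). A condensed sketch of that choreography; rpc_cmd in the trace ultimately drives scripts/rpc.py against the target's socket, which is assumed here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  touch_file=/tmp/aer_touch_file
  rm -f "$touch_file"

  # Background AER listener; it touches $touch_file after registering its AER.
  /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t "$touch_file" &
  aerpid=$!

  # waitforfile: poll up to ~20 s, as in the trace.
  for ((i = 0; i < 200; i++)); do
      [[ -e $touch_file ]] && break
      sleep 0.1
  done

  # Adding a second namespace is the event the tool is waiting to be notified of.
  "$rpc" bdev_malloc_create 64 4096 --name Malloc1
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"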
00:20:20.771 "model_number": "SPDK bdev Controller", 00:20:20.771 "namespaces": [ 00:20:20.771 { 00:20:20.771 "bdev_name": "Malloc0", 00:20:20.771 "name": "Malloc0", 00:20:20.771 "nguid": "F91FE007B5934B0697576530BD97A77E", 00:20:20.771 "nsid": 1, 00:20:20.771 "uuid": "f91fe007-b593-4b06-9757-6530bd97a77e" 00:20:20.771 }, 00:20:20.771 { 00:20:20.771 "bdev_name": "Malloc1", 00:20:20.771 "name": "Malloc1", 00:20:20.771 "nguid": "DB9581AE5BCB494B87FFE043AA7CA0A1", 00:20:20.771 "nsid": 2, 00:20:20.771 "uuid": "db9581ae-5bcb-494b-87ff-e043aa7ca0a1" 00:20:20.771 } 00:20:20.771 ], 00:20:20.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.771 "serial_number": "SPDK00000000000001", 00:20:20.771 "subtype": "NVMe" 00:20:20.771 } 00:20:20.771 ] 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@43 -- # wait 93020 00:20:20.771 13:34:26 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.771 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.771 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.771 13:34:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.771 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:20.771 13:34:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.771 13:34:26 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.771 13:34:26 -- host/aer.sh@51 -- # nvmftestfini 00:20:20.771 13:34:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:20.771 13:34:26 -- nvmf/common.sh@116 -- # sync 00:20:21.106 13:34:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:21.106 13:34:26 -- nvmf/common.sh@119 -- # set +e 00:20:21.106 13:34:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:21.106 13:34:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:21.106 rmmod nvme_tcp 00:20:21.106 rmmod nvme_fabrics 00:20:21.106 rmmod nvme_keyring 00:20:21.106 13:34:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:21.106 13:34:26 -- nvmf/common.sh@123 -- # set -e 00:20:21.106 13:34:26 -- nvmf/common.sh@124 -- # return 0 00:20:21.106 13:34:26 -- nvmf/common.sh@477 -- # '[' -n 92966 ']' 00:20:21.106 13:34:26 -- nvmf/common.sh@478 -- # killprocess 92966 00:20:21.106 13:34:26 -- common/autotest_common.sh@936 -- # '[' -z 92966 ']' 00:20:21.106 13:34:26 -- common/autotest_common.sh@940 -- # kill -0 92966 00:20:21.106 13:34:26 -- common/autotest_common.sh@941 -- # uname 00:20:21.106 13:34:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.106 13:34:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92966 00:20:21.106 13:34:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.106 killing process with pid 92966 00:20:21.106 13:34:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.106 13:34:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92966' 00:20:21.106 13:34:26 -- common/autotest_common.sh@955 -- # kill 92966 00:20:21.106 [2024-12-15 13:34:26.573419] app.c: 883:log_deprecation_hits: 
*WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:21.106 13:34:26 -- common/autotest_common.sh@960 -- # wait 92966 00:20:21.106 13:34:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:21.106 13:34:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.106 13:34:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.106 13:34:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.106 13:34:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:21.106 13:34:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.107 13:34:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.107 13:34:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.365 13:34:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:21.365 00:20:21.365 real 0m2.417s 00:20:21.365 user 0m6.611s 00:20:21.365 sys 0m0.684s 00:20:21.365 13:34:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:21.365 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:21.365 ************************************ 00:20:21.365 END TEST nvmf_aer 00:20:21.365 ************************************ 00:20:21.365 13:34:26 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:21.365 13:34:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:21.365 13:34:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.365 13:34:26 -- common/autotest_common.sh@10 -- # set +x 00:20:21.365 ************************************ 00:20:21.365 START TEST nvmf_async_init 00:20:21.365 ************************************ 00:20:21.366 13:34:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:21.366 * Looking for test storage... 00:20:21.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.366 13:34:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:21.366 13:34:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:21.366 13:34:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:21.366 13:34:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:21.366 13:34:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:21.366 13:34:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:21.366 13:34:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:21.366 13:34:27 -- scripts/common.sh@335 -- # IFS=.-: 00:20:21.366 13:34:27 -- scripts/common.sh@335 -- # read -ra ver1 00:20:21.366 13:34:27 -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.366 13:34:27 -- scripts/common.sh@336 -- # read -ra ver2 00:20:21.366 13:34:27 -- scripts/common.sh@337 -- # local 'op=<' 00:20:21.366 13:34:27 -- scripts/common.sh@339 -- # ver1_l=2 00:20:21.366 13:34:27 -- scripts/common.sh@340 -- # ver2_l=1 00:20:21.366 13:34:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:21.366 13:34:27 -- scripts/common.sh@343 -- # case "$op" in 00:20:21.366 13:34:27 -- scripts/common.sh@344 -- # : 1 00:20:21.366 13:34:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:21.366 13:34:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.366 13:34:27 -- scripts/common.sh@364 -- # decimal 1 00:20:21.366 13:34:27 -- scripts/common.sh@352 -- # local d=1 00:20:21.366 13:34:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.366 13:34:27 -- scripts/common.sh@354 -- # echo 1 00:20:21.366 13:34:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:21.366 13:34:27 -- scripts/common.sh@365 -- # decimal 2 00:20:21.366 13:34:27 -- scripts/common.sh@352 -- # local d=2 00:20:21.366 13:34:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.366 13:34:27 -- scripts/common.sh@354 -- # echo 2 00:20:21.366 13:34:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:21.366 13:34:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:21.366 13:34:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:21.366 13:34:27 -- scripts/common.sh@367 -- # return 0 00:20:21.366 13:34:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.366 13:34:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.366 --rc genhtml_branch_coverage=1 00:20:21.366 --rc genhtml_function_coverage=1 00:20:21.366 --rc genhtml_legend=1 00:20:21.366 --rc geninfo_all_blocks=1 00:20:21.366 --rc geninfo_unexecuted_blocks=1 00:20:21.366 00:20:21.366 ' 00:20:21.366 13:34:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.366 --rc genhtml_branch_coverage=1 00:20:21.366 --rc genhtml_function_coverage=1 00:20:21.366 --rc genhtml_legend=1 00:20:21.366 --rc geninfo_all_blocks=1 00:20:21.366 --rc geninfo_unexecuted_blocks=1 00:20:21.366 00:20:21.366 ' 00:20:21.366 13:34:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.366 --rc genhtml_branch_coverage=1 00:20:21.366 --rc genhtml_function_coverage=1 00:20:21.366 --rc genhtml_legend=1 00:20:21.366 --rc geninfo_all_blocks=1 00:20:21.366 --rc geninfo_unexecuted_blocks=1 00:20:21.366 00:20:21.366 ' 00:20:21.366 13:34:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:21.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.366 --rc genhtml_branch_coverage=1 00:20:21.366 --rc genhtml_function_coverage=1 00:20:21.366 --rc genhtml_legend=1 00:20:21.366 --rc geninfo_all_blocks=1 00:20:21.366 --rc geninfo_unexecuted_blocks=1 00:20:21.366 00:20:21.366 ' 00:20:21.366 13:34:27 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.366 13:34:27 -- nvmf/common.sh@7 -- # uname -s 00:20:21.366 13:34:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.366 13:34:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.366 13:34:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.366 13:34:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.366 13:34:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.366 13:34:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.366 13:34:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.366 13:34:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.366 13:34:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.366 13:34:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:21.366 
13:34:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:21.366 13:34:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.366 13:34:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.366 13:34:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.366 13:34:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.366 13:34:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.366 13:34:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.366 13:34:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.366 13:34:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.366 13:34:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.366 13:34:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.366 13:34:27 -- paths/export.sh@5 -- # export PATH 00:20:21.366 13:34:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.366 13:34:27 -- nvmf/common.sh@46 -- # : 0 00:20:21.366 13:34:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:21.366 13:34:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:21.366 13:34:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:21.366 13:34:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.366 13:34:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.366 13:34:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
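As in the aer run above, nvmf/common.sh generates a host NQN with nvme gen-hostnqn and keeps the matching host ID beside it in an array so both can be splatted onto connect-style commands. A small sketch of that pattern; extracting the UUID with parameter expansion is one way to do it and may differ from the script's exact method:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # Example consumer (illustrative only):
  # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"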
00:20:21.366 13:34:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:21.366 13:34:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:21.366 13:34:27 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:21.366 13:34:27 -- host/async_init.sh@14 -- # null_block_size=512 00:20:21.366 13:34:27 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:21.366 13:34:27 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:21.366 13:34:27 -- host/async_init.sh@20 -- # uuidgen 00:20:21.366 13:34:27 -- host/async_init.sh@20 -- # tr -d - 00:20:21.366 13:34:27 -- host/async_init.sh@20 -- # nguid=5f98e193154d4fbab75b6847e660f292 00:20:21.366 13:34:27 -- host/async_init.sh@22 -- # nvmftestinit 00:20:21.366 13:34:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:21.366 13:34:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.366 13:34:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:21.366 13:34:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:21.366 13:34:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:21.366 13:34:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.366 13:34:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.366 13:34:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.366 13:34:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:21.366 13:34:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:21.366 13:34:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.366 13:34:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.366 13:34:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.366 13:34:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:21.366 13:34:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.366 13:34:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.366 13:34:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.366 13:34:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.366 13:34:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.366 13:34:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.366 13:34:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.366 13:34:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.366 13:34:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:21.625 13:34:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:21.625 Cannot find device "nvmf_tgt_br" 00:20:21.625 13:34:27 -- nvmf/common.sh@154 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.625 Cannot find device "nvmf_tgt_br2" 00:20:21.625 13:34:27 -- nvmf/common.sh@155 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:21.625 13:34:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:21.625 Cannot find device "nvmf_tgt_br" 00:20:21.625 13:34:27 -- nvmf/common.sh@157 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:21.625 Cannot find device "nvmf_tgt_br2" 00:20:21.625 13:34:27 
-- nvmf/common.sh@158 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:21.625 13:34:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:21.625 13:34:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.625 13:34:27 -- nvmf/common.sh@161 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.625 13:34:27 -- nvmf/common.sh@162 -- # true 00:20:21.625 13:34:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.625 13:34:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.625 13:34:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.625 13:34:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.625 13:34:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.625 13:34:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.625 13:34:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.625 13:34:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.625 13:34:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:21.625 13:34:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:21.625 13:34:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:21.625 13:34:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:21.625 13:34:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:21.625 13:34:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.625 13:34:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.884 13:34:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.884 13:34:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:21.884 13:34:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:21.884 13:34:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.884 13:34:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.884 13:34:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.884 13:34:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.884 13:34:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.884 13:34:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:21.884 00:20:21.884 --- 10.0.0.2 ping statistics --- 00:20:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.884 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:21.884 13:34:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
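The "Cannot find device" and "Cannot open network namespace" lines above (and in the earlier aer run) are the teardown half of nvmf_veth_init running against a clean slate: every delete is attempted unconditionally and allowed to fail (the trace's "# true" entries are the fallback branches), so the noise is expected. A standalone equivalent of that best-effort cleanup might look like this; the error redirection is an addition for quiet output, whereas the real script simply lets the messages print:

  ip link set nvmf_init_br nomaster        2>/dev/null || true
  ip link set nvmf_tgt_br  nomaster        2>/dev/null || true
  ip link set nvmf_init_br down            2>/dev/null || true
  ip link set nvmf_tgt_br  down            2>/dev/null || true
  ip link delete nvmf_br type bridge       2>/dev/null || true
  ip link delete nvmf_init_if              2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true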
00:20:21.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:20:21.884 00:20:21.884 --- 10.0.0.3 ping statistics --- 00:20:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.884 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:21.884 13:34:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:21.884 00:20:21.884 --- 10.0.0.1 ping statistics --- 00:20:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.884 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:21.884 13:34:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.884 13:34:27 -- nvmf/common.sh@421 -- # return 0 00:20:21.884 13:34:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.884 13:34:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.884 13:34:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.884 13:34:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.884 13:34:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.884 13:34:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.884 13:34:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.884 13:34:27 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:21.884 13:34:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:21.884 13:34:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.884 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:20:21.884 13:34:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:21.884 13:34:27 -- nvmf/common.sh@469 -- # nvmfpid=93201 00:20:21.884 13:34:27 -- nvmf/common.sh@470 -- # waitforlisten 93201 00:20:21.884 13:34:27 -- common/autotest_common.sh@829 -- # '[' -z 93201 ']' 00:20:21.884 13:34:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.884 13:34:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.884 13:34:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.884 13:34:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.884 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:20:21.884 [2024-12-15 13:34:27.474738] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.884 [2024-12-15 13:34:27.474820] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.143 [2024-12-15 13:34:27.607039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.143 [2024-12-15 13:34:27.666852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.143 [2024-12-15 13:34:27.667049] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.143 [2024-12-15 13:34:27.667062] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
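The nvmf_async_init host test starting here provisions a 1024-block, 512-byte null bdev and exposes it as namespace 1 of cnode0 with an explicit NGUID (the "uuidgen | tr -d -" value from the trace), then attaches an SPDK bdev_nvme controller to it over TCP. A condensed sketch of the RPC sequence that follows; rpc_cmd is assumed to resolve to scripts/rpc.py against the target's socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nguid=$(uuidgen | tr -d -)     # e.g. 5f98e193154d4fbab75b6847e660f292

  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" bdev_null_create null0 1024 512
  "$rpc" bdev_wait_for_examine
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Attach the initiator-side bdev; bdev_get_bdevs should then report nvme0n1
  # whose uuid/nguid match the value generated above.
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
  "$rpc" bdev_get_bdevs -b nvme0n1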
00:20:22.143 [2024-12-15 13:34:27.667071] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.143 [2024-12-15 13:34:27.667099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.078 13:34:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:23.078 13:34:28 -- common/autotest_common.sh@862 -- # return 0 00:20:23.078 13:34:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.078 13:34:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 13:34:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.078 13:34:28 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 [2024-12-15 13:34:28.558860] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 null0 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5f98e193154d4fbab75b6847e660f292 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.078 [2024-12-15 13:34:28.599415] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.078 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.078 13:34:28 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:23.078 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.078 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.338 nvme0n1 00:20:23.338 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.338 13:34:28 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:23.338 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.338 13:34:28 -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.338 [ 00:20:23.338 { 00:20:23.338 "aliases": [ 00:20:23.338 "5f98e193-154d-4fba-b75b-6847e660f292" 00:20:23.338 ], 00:20:23.338 "assigned_rate_limits": { 00:20:23.338 "r_mbytes_per_sec": 0, 00:20:23.338 "rw_ios_per_sec": 0, 00:20:23.338 "rw_mbytes_per_sec": 0, 00:20:23.338 "w_mbytes_per_sec": 0 00:20:23.338 }, 00:20:23.338 "block_size": 512, 00:20:23.338 "claimed": false, 00:20:23.338 "driver_specific": { 00:20:23.338 "mp_policy": "active_passive", 00:20:23.338 "nvme": [ 00:20:23.338 { 00:20:23.338 "ctrlr_data": { 00:20:23.338 "ana_reporting": false, 00:20:23.338 "cntlid": 1, 00:20:23.338 "firmware_revision": "24.01.1", 00:20:23.338 "model_number": "SPDK bdev Controller", 00:20:23.338 "multi_ctrlr": true, 00:20:23.338 "oacs": { 00:20:23.338 "firmware": 0, 00:20:23.338 "format": 0, 00:20:23.338 "ns_manage": 0, 00:20:23.338 "security": 0 00:20:23.338 }, 00:20:23.338 "serial_number": "00000000000000000000", 00:20:23.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.338 "vendor_id": "0x8086" 00:20:23.338 }, 00:20:23.338 "ns_data": { 00:20:23.338 "can_share": true, 00:20:23.338 "id": 1 00:20:23.338 }, 00:20:23.338 "trid": { 00:20:23.338 "adrfam": "IPv4", 00:20:23.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.338 "traddr": "10.0.0.2", 00:20:23.338 "trsvcid": "4420", 00:20:23.338 "trtype": "TCP" 00:20:23.338 }, 00:20:23.338 "vs": { 00:20:23.338 "nvme_version": "1.3" 00:20:23.338 } 00:20:23.338 } 00:20:23.338 ] 00:20:23.338 }, 00:20:23.338 "name": "nvme0n1", 00:20:23.338 "num_blocks": 2097152, 00:20:23.338 "product_name": "NVMe disk", 00:20:23.338 "supported_io_types": { 00:20:23.338 "abort": true, 00:20:23.338 "compare": true, 00:20:23.338 "compare_and_write": true, 00:20:23.338 "flush": true, 00:20:23.338 "nvme_admin": true, 00:20:23.338 "nvme_io": true, 00:20:23.338 "read": true, 00:20:23.338 "reset": true, 00:20:23.338 "unmap": false, 00:20:23.338 "write": true, 00:20:23.338 "write_zeroes": true 00:20:23.338 }, 00:20:23.338 "uuid": "5f98e193-154d-4fba-b75b-6847e660f292", 00:20:23.338 "zoned": false 00:20:23.338 } 00:20:23.338 ] 00:20:23.338 13:34:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.338 13:34:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:23.338 13:34:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.338 13:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:23.338 [2024-12-15 13:34:28.864453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:23.338 [2024-12-15 13:34:28.864551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d5a00 (9): Bad file descriptor 00:20:23.338 [2024-12-15 13:34:28.996698] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
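bdev_nvme_reset_controller above tears down the admin connection and reconnects; the visible proof in this log is cntlid in bdev_get_bdevs moving from 1 to 2 (and to 3 after the later re-attach on the TLS listener). A small sketch of that check; the jq path mirrors the JSON shape printed above, and jq itself is illustrative rather than part of the test:

  "$rpc" bdev_nvme_reset_controller nvme0
  # A successful reset lands on a freshly allocated controller, so cntlid bumps.
  "$rpc" bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'    # 1 -> 2

The remainder of the test repeats the attach over TLS: it detaches nvme0, writes a PSK interchange key to a mktemp file with mode 0600, adds a 4421 listener with --secure-channel, registers host nqn.2016-06.io.spdk:host1 with --psk, and re-attaches, after which cntlid reads 3.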
00:20:23.338 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.338 13:34:29 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:23.338 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.338 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.338 [ 00:20:23.338 { 00:20:23.338 "aliases": [ 00:20:23.338 "5f98e193-154d-4fba-b75b-6847e660f292" 00:20:23.338 ], 00:20:23.338 "assigned_rate_limits": { 00:20:23.338 "r_mbytes_per_sec": 0, 00:20:23.338 "rw_ios_per_sec": 0, 00:20:23.338 "rw_mbytes_per_sec": 0, 00:20:23.338 "w_mbytes_per_sec": 0 00:20:23.338 }, 00:20:23.338 "block_size": 512, 00:20:23.338 "claimed": false, 00:20:23.338 "driver_specific": { 00:20:23.338 "mp_policy": "active_passive", 00:20:23.338 "nvme": [ 00:20:23.338 { 00:20:23.338 "ctrlr_data": { 00:20:23.338 "ana_reporting": false, 00:20:23.338 "cntlid": 2, 00:20:23.338 "firmware_revision": "24.01.1", 00:20:23.338 "model_number": "SPDK bdev Controller", 00:20:23.338 "multi_ctrlr": true, 00:20:23.338 "oacs": { 00:20:23.338 "firmware": 0, 00:20:23.338 "format": 0, 00:20:23.338 "ns_manage": 0, 00:20:23.338 "security": 0 00:20:23.338 }, 00:20:23.338 "serial_number": "00000000000000000000", 00:20:23.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.338 "vendor_id": "0x8086" 00:20:23.338 }, 00:20:23.338 "ns_data": { 00:20:23.338 "can_share": true, 00:20:23.338 "id": 1 00:20:23.338 }, 00:20:23.338 "trid": { 00:20:23.338 "adrfam": "IPv4", 00:20:23.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.338 "traddr": "10.0.0.2", 00:20:23.338 "trsvcid": "4420", 00:20:23.338 "trtype": "TCP" 00:20:23.338 }, 00:20:23.338 "vs": { 00:20:23.338 "nvme_version": "1.3" 00:20:23.338 } 00:20:23.338 } 00:20:23.338 ] 00:20:23.338 }, 00:20:23.338 "name": "nvme0n1", 00:20:23.338 "num_blocks": 2097152, 00:20:23.338 "product_name": "NVMe disk", 00:20:23.338 "supported_io_types": { 00:20:23.338 "abort": true, 00:20:23.338 "compare": true, 00:20:23.338 "compare_and_write": true, 00:20:23.338 "flush": true, 00:20:23.338 "nvme_admin": true, 00:20:23.338 "nvme_io": true, 00:20:23.338 "read": true, 00:20:23.338 "reset": true, 00:20:23.338 "unmap": false, 00:20:23.338 "write": true, 00:20:23.338 "write_zeroes": true 00:20:23.338 }, 00:20:23.338 "uuid": "5f98e193-154d-4fba-b75b-6847e660f292", 00:20:23.338 "zoned": false 00:20:23.338 } 00:20:23.338 ] 00:20:23.338 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.338 13:34:29 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.338 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.338 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@53 -- # mktemp 00:20:23.598 13:34:29 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DqTP8Vzcjp 00:20:23.598 13:34:29 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:23.598 13:34:29 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DqTP8Vzcjp 00:20:23.598 13:34:29 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 [2024-12-15 13:34:29.060636] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.598 [2024-12-15 13:34:29.060778] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DqTP8Vzcjp 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.DqTP8Vzcjp 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 [2024-12-15 13:34:29.080617] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.598 nvme0n1 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 [ 00:20:23.598 { 00:20:23.598 "aliases": [ 00:20:23.598 "5f98e193-154d-4fba-b75b-6847e660f292" 00:20:23.598 ], 00:20:23.598 "assigned_rate_limits": { 00:20:23.598 "r_mbytes_per_sec": 0, 00:20:23.598 "rw_ios_per_sec": 0, 00:20:23.598 "rw_mbytes_per_sec": 0, 00:20:23.598 "w_mbytes_per_sec": 0 00:20:23.598 }, 00:20:23.598 "block_size": 512, 00:20:23.598 "claimed": false, 00:20:23.598 "driver_specific": { 00:20:23.598 "mp_policy": "active_passive", 00:20:23.598 "nvme": [ 00:20:23.598 { 00:20:23.598 "ctrlr_data": { 00:20:23.598 "ana_reporting": false, 00:20:23.598 "cntlid": 3, 00:20:23.598 "firmware_revision": "24.01.1", 00:20:23.598 "model_number": "SPDK bdev Controller", 00:20:23.598 "multi_ctrlr": true, 00:20:23.598 "oacs": { 00:20:23.598 "firmware": 0, 00:20:23.598 "format": 0, 00:20:23.598 "ns_manage": 0, 00:20:23.598 "security": 0 00:20:23.598 }, 00:20:23.598 "serial_number": "00000000000000000000", 00:20:23.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.598 "vendor_id": "0x8086" 00:20:23.598 }, 00:20:23.598 "ns_data": { 00:20:23.598 "can_share": true, 00:20:23.598 "id": 1 00:20:23.598 }, 00:20:23.598 "trid": { 00:20:23.598 "adrfam": "IPv4", 00:20:23.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.598 "traddr": "10.0.0.2", 00:20:23.598 "trsvcid": "4421", 00:20:23.598 "trtype": "TCP" 00:20:23.598 }, 00:20:23.598 "vs": { 00:20:23.598 "nvme_version": "1.3" 00:20:23.598 } 00:20:23.598 } 00:20:23.598 ] 00:20:23.598 }, 00:20:23.598 "name": "nvme0n1", 00:20:23.598 "num_blocks": 2097152, 00:20:23.598 "product_name": "NVMe disk", 00:20:23.598 "supported_io_types": { 00:20:23.598 "abort": true, 00:20:23.598 "compare": true, 00:20:23.598 "compare_and_write": true, 00:20:23.598 "flush": true, 00:20:23.598 "nvme_admin": true, 00:20:23.598 "nvme_io": true, 00:20:23.598 
"read": true, 00:20:23.598 "reset": true, 00:20:23.598 "unmap": false, 00:20:23.598 "write": true, 00:20:23.598 "write_zeroes": true 00:20:23.598 }, 00:20:23.598 "uuid": "5f98e193-154d-4fba-b75b-6847e660f292", 00:20:23.598 "zoned": false 00:20:23.598 } 00:20:23.598 ] 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.598 13:34:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.598 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:23.598 13:34:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.598 13:34:29 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.DqTP8Vzcjp 00:20:23.598 13:34:29 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:23.598 13:34:29 -- host/async_init.sh@78 -- # nvmftestfini 00:20:23.598 13:34:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:23.598 13:34:29 -- nvmf/common.sh@116 -- # sync 00:20:23.598 13:34:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:23.598 13:34:29 -- nvmf/common.sh@119 -- # set +e 00:20:23.598 13:34:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:23.598 13:34:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:23.598 rmmod nvme_tcp 00:20:23.598 rmmod nvme_fabrics 00:20:23.857 rmmod nvme_keyring 00:20:23.857 13:34:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:23.857 13:34:29 -- nvmf/common.sh@123 -- # set -e 00:20:23.857 13:34:29 -- nvmf/common.sh@124 -- # return 0 00:20:23.857 13:34:29 -- nvmf/common.sh@477 -- # '[' -n 93201 ']' 00:20:23.857 13:34:29 -- nvmf/common.sh@478 -- # killprocess 93201 00:20:23.857 13:34:29 -- common/autotest_common.sh@936 -- # '[' -z 93201 ']' 00:20:23.857 13:34:29 -- common/autotest_common.sh@940 -- # kill -0 93201 00:20:23.857 13:34:29 -- common/autotest_common.sh@941 -- # uname 00:20:23.857 13:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:23.857 13:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93201 00:20:23.857 13:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:23.857 13:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:23.857 13:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93201' 00:20:23.857 killing process with pid 93201 00:20:23.857 13:34:29 -- common/autotest_common.sh@955 -- # kill 93201 00:20:23.857 13:34:29 -- common/autotest_common.sh@960 -- # wait 93201 00:20:23.857 13:34:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.857 13:34:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.857 13:34:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:23.857 13:34:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.857 13:34:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.857 13:34:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.857 13:34:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.857 13:34:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.117 13:34:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:24.117 00:20:24.117 real 0m2.714s 00:20:24.117 user 0m2.555s 00:20:24.117 sys 0m0.646s 00:20:24.117 13:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:24.117 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:24.117 ************************************ 00:20:24.117 END TEST nvmf_async_init 00:20:24.117 
************************************ 00:20:24.117 13:34:29 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.117 13:34:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.117 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:24.117 ************************************ 00:20:24.117 START TEST dma 00:20:24.117 ************************************ 00:20:24.117 13:34:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.117 * Looking for test storage... 00:20:24.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.117 13:34:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:24.117 13:34:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:24.117 13:34:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:24.117 13:34:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:24.117 13:34:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:24.117 13:34:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:24.117 13:34:29 -- scripts/common.sh@335 -- # IFS=.-: 00:20:24.117 13:34:29 -- scripts/common.sh@335 -- # read -ra ver1 00:20:24.117 13:34:29 -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.117 13:34:29 -- scripts/common.sh@336 -- # read -ra ver2 00:20:24.117 13:34:29 -- scripts/common.sh@337 -- # local 'op=<' 00:20:24.117 13:34:29 -- scripts/common.sh@339 -- # ver1_l=2 00:20:24.117 13:34:29 -- scripts/common.sh@340 -- # ver2_l=1 00:20:24.117 13:34:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:24.117 13:34:29 -- scripts/common.sh@343 -- # case "$op" in 00:20:24.117 13:34:29 -- scripts/common.sh@344 -- # : 1 00:20:24.117 13:34:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:24.117 13:34:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.117 13:34:29 -- scripts/common.sh@364 -- # decimal 1 00:20:24.117 13:34:29 -- scripts/common.sh@352 -- # local d=1 00:20:24.117 13:34:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.117 13:34:29 -- scripts/common.sh@354 -- # echo 1 00:20:24.117 13:34:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:24.117 13:34:29 -- scripts/common.sh@365 -- # decimal 2 00:20:24.117 13:34:29 -- scripts/common.sh@352 -- # local d=2 00:20:24.117 13:34:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.117 13:34:29 -- scripts/common.sh@354 -- # echo 2 00:20:24.117 13:34:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:24.117 13:34:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:24.117 13:34:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:24.117 13:34:29 -- scripts/common.sh@367 -- # return 0 00:20:24.117 13:34:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:24.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.117 --rc genhtml_branch_coverage=1 00:20:24.117 --rc genhtml_function_coverage=1 00:20:24.117 --rc genhtml_legend=1 00:20:24.117 --rc geninfo_all_blocks=1 00:20:24.117 --rc geninfo_unexecuted_blocks=1 00:20:24.117 00:20:24.117 ' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:24.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.117 --rc genhtml_branch_coverage=1 00:20:24.117 --rc genhtml_function_coverage=1 00:20:24.117 --rc genhtml_legend=1 00:20:24.117 --rc geninfo_all_blocks=1 00:20:24.117 --rc geninfo_unexecuted_blocks=1 00:20:24.117 00:20:24.117 ' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:24.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.117 --rc genhtml_branch_coverage=1 00:20:24.117 --rc genhtml_function_coverage=1 00:20:24.117 --rc genhtml_legend=1 00:20:24.117 --rc geninfo_all_blocks=1 00:20:24.117 --rc geninfo_unexecuted_blocks=1 00:20:24.117 00:20:24.117 ' 00:20:24.117 13:34:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:24.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.117 --rc genhtml_branch_coverage=1 00:20:24.117 --rc genhtml_function_coverage=1 00:20:24.117 --rc genhtml_legend=1 00:20:24.117 --rc geninfo_all_blocks=1 00:20:24.117 --rc geninfo_unexecuted_blocks=1 00:20:24.117 00:20:24.117 ' 00:20:24.117 13:34:29 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.117 13:34:29 -- nvmf/common.sh@7 -- # uname -s 00:20:24.117 13:34:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.117 13:34:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.117 13:34:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.117 13:34:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.117 13:34:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.117 13:34:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.117 13:34:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.117 13:34:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.117 13:34:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.117 13:34:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.376 13:34:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:24.376 
13:34:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:24.376 13:34:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.376 13:34:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.376 13:34:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.376 13:34:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.376 13:34:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.377 13:34:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.377 13:34:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.377 13:34:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.377 13:34:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.377 13:34:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.377 13:34:29 -- paths/export.sh@5 -- # export PATH 00:20:24.377 13:34:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.377 13:34:29 -- nvmf/common.sh@46 -- # : 0 00:20:24.377 13:34:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:24.377 13:34:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:24.377 13:34:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:24.377 13:34:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.377 13:34:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.377 13:34:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:24.377 13:34:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:24.377 13:34:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:24.377 13:34:29 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:24.377 13:34:29 -- host/dma.sh@13 -- # exit 0 00:20:24.377 00:20:24.377 real 0m0.196s 00:20:24.377 user 0m0.118s 00:20:24.377 sys 0m0.089s 00:20:24.377 13:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:24.377 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:24.377 ************************************ 00:20:24.377 END TEST dma 00:20:24.377 ************************************ 00:20:24.377 13:34:29 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.377 13:34:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:24.377 13:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:24.377 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:20:24.377 ************************************ 00:20:24.377 START TEST nvmf_identify 00:20:24.377 ************************************ 00:20:24.377 13:34:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.377 * Looking for test storage... 00:20:24.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:24.377 13:34:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:24.377 13:34:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:24.377 13:34:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:24.377 13:34:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:24.377 13:34:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:24.377 13:34:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:24.377 13:34:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:24.377 13:34:30 -- scripts/common.sh@335 -- # IFS=.-: 00:20:24.377 13:34:30 -- scripts/common.sh@335 -- # read -ra ver1 00:20:24.377 13:34:30 -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.377 13:34:30 -- scripts/common.sh@336 -- # read -ra ver2 00:20:24.377 13:34:30 -- scripts/common.sh@337 -- # local 'op=<' 00:20:24.377 13:34:30 -- scripts/common.sh@339 -- # ver1_l=2 00:20:24.377 13:34:30 -- scripts/common.sh@340 -- # ver2_l=1 00:20:24.377 13:34:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:24.377 13:34:30 -- scripts/common.sh@343 -- # case "$op" in 00:20:24.377 13:34:30 -- scripts/common.sh@344 -- # : 1 00:20:24.377 13:34:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:24.377 13:34:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.377 13:34:30 -- scripts/common.sh@364 -- # decimal 1 00:20:24.377 13:34:30 -- scripts/common.sh@352 -- # local d=1 00:20:24.377 13:34:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.377 13:34:30 -- scripts/common.sh@354 -- # echo 1 00:20:24.377 13:34:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:24.377 13:34:30 -- scripts/common.sh@365 -- # decimal 2 00:20:24.377 13:34:30 -- scripts/common.sh@352 -- # local d=2 00:20:24.377 13:34:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.377 13:34:30 -- scripts/common.sh@354 -- # echo 2 00:20:24.377 13:34:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:24.377 13:34:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:24.377 13:34:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:24.377 13:34:30 -- scripts/common.sh@367 -- # return 0 00:20:24.377 13:34:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.377 13:34:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:24.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.377 --rc genhtml_branch_coverage=1 00:20:24.377 --rc genhtml_function_coverage=1 00:20:24.377 --rc genhtml_legend=1 00:20:24.377 --rc geninfo_all_blocks=1 00:20:24.377 --rc geninfo_unexecuted_blocks=1 00:20:24.377 00:20:24.377 ' 00:20:24.377 13:34:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:24.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.377 --rc genhtml_branch_coverage=1 00:20:24.377 --rc genhtml_function_coverage=1 00:20:24.377 --rc genhtml_legend=1 00:20:24.377 --rc geninfo_all_blocks=1 00:20:24.377 --rc geninfo_unexecuted_blocks=1 00:20:24.377 00:20:24.377 ' 00:20:24.377 13:34:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:24.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.377 --rc genhtml_branch_coverage=1 00:20:24.377 --rc genhtml_function_coverage=1 00:20:24.377 --rc genhtml_legend=1 00:20:24.377 --rc geninfo_all_blocks=1 00:20:24.377 --rc geninfo_unexecuted_blocks=1 00:20:24.377 00:20:24.377 ' 00:20:24.377 13:34:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:24.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.377 --rc genhtml_branch_coverage=1 00:20:24.377 --rc genhtml_function_coverage=1 00:20:24.377 --rc genhtml_legend=1 00:20:24.377 --rc geninfo_all_blocks=1 00:20:24.377 --rc geninfo_unexecuted_blocks=1 00:20:24.377 00:20:24.377 ' 00:20:24.377 13:34:30 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:24.377 13:34:30 -- nvmf/common.sh@7 -- # uname -s 00:20:24.377 13:34:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.377 13:34:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.377 13:34:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.377 13:34:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.377 13:34:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.377 13:34:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.377 13:34:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.377 13:34:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.377 13:34:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.377 13:34:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.377 13:34:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:24.637 
13:34:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:24.637 13:34:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.637 13:34:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.637 13:34:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:24.637 13:34:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:24.637 13:34:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.637 13:34:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.637 13:34:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.637 13:34:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.637 13:34:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.637 13:34:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.637 13:34:30 -- paths/export.sh@5 -- # export PATH 00:20:24.637 13:34:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.637 13:34:30 -- nvmf/common.sh@46 -- # : 0 00:20:24.637 13:34:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:24.637 13:34:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:24.637 13:34:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:24.637 13:34:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.637 13:34:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.637 13:34:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
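Both dma.sh and identify.sh source nvmf/common.sh, which, as traced here, generates a fresh host NQN with nvme gen-hostnqn and carries the UUID portion along as the host ID. A two-line sketch of that relationship (the exact extraction used by common.sh may differ; this only reproduces what the trace shows):

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # -> 245f2070-11fd-4cc8-92e9-20ee097dca35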
00:20:24.637 13:34:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:24.637 13:34:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:24.637 13:34:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.637 13:34:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.637 13:34:30 -- host/identify.sh@14 -- # nvmftestinit 00:20:24.637 13:34:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:24.637 13:34:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.637 13:34:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:24.637 13:34:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:24.637 13:34:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:24.637 13:34:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.637 13:34:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.637 13:34:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.637 13:34:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:24.637 13:34:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:24.637 13:34:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:24.637 13:34:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:24.637 13:34:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:24.637 13:34:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:24.637 13:34:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.637 13:34:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.637 13:34:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:24.637 13:34:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:24.637 13:34:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:24.637 13:34:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:24.637 13:34:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:24.637 13:34:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.637 13:34:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:24.637 13:34:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:24.637 13:34:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:24.637 13:34:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:24.637 13:34:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:24.637 13:34:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:24.637 Cannot find device "nvmf_tgt_br" 00:20:24.637 13:34:30 -- nvmf/common.sh@154 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.637 Cannot find device "nvmf_tgt_br2" 00:20:24.637 13:34:30 -- nvmf/common.sh@155 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:24.637 13:34:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:24.637 Cannot find device "nvmf_tgt_br" 00:20:24.637 13:34:30 -- nvmf/common.sh@157 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:24.637 Cannot find device "nvmf_tgt_br2" 00:20:24.637 13:34:30 -- nvmf/common.sh@158 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:24.637 13:34:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:24.637 13:34:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.637 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:24.637 13:34:30 -- nvmf/common.sh@161 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.637 13:34:30 -- nvmf/common.sh@162 -- # true 00:20:24.637 13:34:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.637 13:34:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.637 13:34:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.637 13:34:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.637 13:34:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.637 13:34:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.637 13:34:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.637 13:34:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.637 13:34:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.637 13:34:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:24.637 13:34:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:24.637 13:34:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:24.637 13:34:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:24.637 13:34:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.637 13:34:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.637 13:34:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.637 13:34:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:24.637 13:34:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:24.637 13:34:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.897 13:34:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.897 13:34:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.897 13:34:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.897 13:34:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.897 13:34:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:24.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:20:24.897 00:20:24.897 --- 10.0.0.2 ping statistics --- 00:20:24.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.897 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:24.897 13:34:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:24.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:24.897 00:20:24.897 --- 10.0.0.3 ping statistics --- 00:20:24.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.897 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:24.897 13:34:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:24.897 00:20:24.897 --- 10.0.0.1 ping statistics --- 00:20:24.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.897 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:24.897 13:34:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.897 13:34:30 -- nvmf/common.sh@421 -- # return 0 00:20:24.897 13:34:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:24.897 13:34:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.897 13:34:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:24.897 13:34:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:24.897 13:34:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.897 13:34:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:24.897 13:34:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.897 13:34:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:24.897 13:34:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.897 13:34:30 -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 13:34:30 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.897 13:34:30 -- host/identify.sh@19 -- # nvmfpid=93489 00:20:24.897 13:34:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.897 13:34:30 -- host/identify.sh@23 -- # waitforlisten 93489 00:20:24.897 13:34:30 -- common/autotest_common.sh@829 -- # '[' -z 93489 ']' 00:20:24.897 13:34:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.897 13:34:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.897 13:34:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.897 13:34:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.897 13:34:30 -- common/autotest_common.sh@10 -- # set +x 00:20:24.897 [2024-12-15 13:34:30.459212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.897 [2024-12-15 13:34:30.459299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.156 [2024-12-15 13:34:30.604038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.156 [2024-12-15 13:34:30.676318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:25.156 [2024-12-15 13:34:30.676495] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.156 [2024-12-15 13:34:30.676512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.156 [2024-12-15 13:34:30.676524] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
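Before identify.sh can reach the target, nvmf_veth_init (traced above) builds a self-contained topology: one veth pair facing the initiator (nvmf_init_if at 10.0.0.1/24), two veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), all host-side ends enslaved to the nvmf_br bridge, plus iptables rules admitting TCP/4420 and bridge forwarding; the three pings verify reachability in both directions. A condensed sketch with the same interface names and addresses (teardown of leftover interfaces and error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-facing pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c \
    'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up     # join the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as shown above), so its 10.0.0.2 listeners are reachable from the initiator side only through the bridge.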
00:20:25.156 [2024-12-15 13:34:30.676650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.156 [2024-12-15 13:34:30.676988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.156 [2024-12-15 13:34:30.677437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.156 [2024-12-15 13:34:30.677452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.092 13:34:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.092 13:34:31 -- common/autotest_common.sh@862 -- # return 0 00:20:26.092 13:34:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 [2024-12-15 13:34:31.498855] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:26.092 13:34:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 13:34:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 Malloc0 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 [2024-12-15 13:34:31.604711] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.092 13:34:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:26.092 13:34:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.092 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:20:26.092 [2024-12-15 13:34:31.620440] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:26.092 [ 
00:20:26.092 { 00:20:26.092 "allow_any_host": true, 00:20:26.092 "hosts": [], 00:20:26.092 "listen_addresses": [ 00:20:26.092 { 00:20:26.092 "adrfam": "IPv4", 00:20:26.092 "traddr": "10.0.0.2", 00:20:26.092 "transport": "TCP", 00:20:26.092 "trsvcid": "4420", 00:20:26.092 "trtype": "TCP" 00:20:26.092 } 00:20:26.092 ], 00:20:26.092 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:26.092 "subtype": "Discovery" 00:20:26.092 }, 00:20:26.092 { 00:20:26.092 "allow_any_host": true, 00:20:26.092 "hosts": [], 00:20:26.092 "listen_addresses": [ 00:20:26.092 { 00:20:26.092 "adrfam": "IPv4", 00:20:26.092 "traddr": "10.0.0.2", 00:20:26.092 "transport": "TCP", 00:20:26.092 "trsvcid": "4420", 00:20:26.092 "trtype": "TCP" 00:20:26.092 } 00:20:26.092 ], 00:20:26.092 "max_cntlid": 65519, 00:20:26.092 "max_namespaces": 32, 00:20:26.092 "min_cntlid": 1, 00:20:26.092 "model_number": "SPDK bdev Controller", 00:20:26.092 "namespaces": [ 00:20:26.092 { 00:20:26.092 "bdev_name": "Malloc0", 00:20:26.092 "eui64": "ABCDEF0123456789", 00:20:26.092 "name": "Malloc0", 00:20:26.092 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:26.092 "nsid": 1, 00:20:26.093 "uuid": "eeaef630-0058-490c-bc41-76fd8477c4fb" 00:20:26.093 } 00:20:26.093 ], 00:20:26.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.093 "serial_number": "SPDK00000000000001", 00:20:26.093 "subtype": "NVMe" 00:20:26.093 } 00:20:26.093 ] 00:20:26.093 13:34:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.093 13:34:31 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:26.093 [2024-12-15 13:34:31.652605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
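With the target running, identify.sh provisions a minimal subsystem over RPC and then points the identify example at the discovery service; the DEBUG trace that follows is the resulting fabrics connect and IDENTIFY exchange. A sketch of the equivalent sequence with scripts/rpc.py from the SPDK repo root (the script itself goes through the rpc_cmd wrapper; every value below is taken from the trace above):

# transport options as used by this test run
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, exported as namespace 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# data listener plus the discovery listener, both on 10.0.0.2:4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# query the discovery controller; -L all enables the debug logging seen below
build/bin/spdk_nvme_identify -L all \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'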
00:20:26.093 [2024-12-15 13:34:31.652664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93542 ] 00:20:26.355 [2024-12-15 13:34:31.784200] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:26.355 [2024-12-15 13:34:31.784272] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:26.355 [2024-12-15 13:34:31.784279] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:26.355 [2024-12-15 13:34:31.784288] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:26.355 [2024-12-15 13:34:31.784297] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:26.355 [2024-12-15 13:34:31.784424] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:26.355 [2024-12-15 13:34:31.784496] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfe4510 0 00:20:26.355 [2024-12-15 13:34:31.792624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:26.355 [2024-12-15 13:34:31.792643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:26.355 [2024-12-15 13:34:31.792648] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:26.355 [2024-12-15 13:34:31.792652] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:26.355 [2024-12-15 13:34:31.792695] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.792701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.792705] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.792717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:26.355 [2024-12-15 13:34:31.792746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.800603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.800623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.800643] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800648] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.800660] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:26.355 [2024-12-15 13:34:31.800666] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:26.355 [2024-12-15 13:34:31.800672] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:26.355 [2024-12-15 13:34:31.800692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 
13:34:31.800701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.800710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.800738] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.800807] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.800814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.800817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800821] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.800827] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:26.355 [2024-12-15 13:34:31.800834] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:26.355 [2024-12-15 13:34:31.800841] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800848] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.800855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.800889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.800962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.800968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.800971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.800975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.800983] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:26.355 [2024-12-15 13:34:31.800991] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.800998] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.801013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.801030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.801081] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.801087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.801091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801095] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.801101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.801111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.801126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.801143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.801209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.801216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.801219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.801229] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:26.355 [2024-12-15 13:34:31.801234] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.801241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.801346] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:26.355 [2024-12-15 13:34:31.801351] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.801360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.801375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.801393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.801447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.801453] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.801457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801460] 
nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.355 [2024-12-15 13:34:31.801466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:26.355 [2024-12-15 13:34:31.801476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801480] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.355 [2024-12-15 13:34:31.801484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.355 [2024-12-15 13:34:31.801491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.355 [2024-12-15 13:34:31.801521] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.355 [2024-12-15 13:34:31.801576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.355 [2024-12-15 13:34:31.801583] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.355 [2024-12-15 13:34:31.801598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801603] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.356 [2024-12-15 13:34:31.801609] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:26.356 [2024-12-15 13:34:31.801614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.801622] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:26.356 [2024-12-15 13:34:31.801638] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.801648] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801656] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.801664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.356 [2024-12-15 13:34:31.801684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.356 [2024-12-15 13:34:31.801776] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.356 [2024-12-15 13:34:31.801784] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.356 [2024-12-15 13:34:31.801787] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801791] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4510): datao=0, datal=4096, cccid=0 00:20:26.356 [2024-12-15 13:34:31.801796] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10308a0) on tqpair(0xfe4510): expected_datao=0, payload_size=4096 00:20:26.356 [2024-12-15 13:34:31.801805] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801809] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.356 [2024-12-15 13:34:31.801823] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.356 [2024-12-15 13:34:31.801826] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.356 [2024-12-15 13:34:31.801840] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:26.356 [2024-12-15 13:34:31.801845] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:26.356 [2024-12-15 13:34:31.801850] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:26.356 [2024-12-15 13:34:31.801855] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:26.356 [2024-12-15 13:34:31.801860] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:26.356 [2024-12-15 13:34:31.801865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.801877] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.801885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.801901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:26.356 [2024-12-15 13:34:31.801921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.356 [2024-12-15 13:34:31.801984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.356 [2024-12-15 13:34:31.801991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.356 [2024-12-15 13:34:31.801994] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.801998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10308a0) on tqpair=0xfe4510 00:20:26.356 [2024-12-15 13:34:31.802006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.356 [2024-12-15 13:34:31.802026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.356 [2024-12-15 13:34:31.802045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.356 [2024-12-15 13:34:31.802063] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.356 [2024-12-15 13:34:31.802081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.802094] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:26.356 [2024-12-15 13:34:31.802101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.356 [2024-12-15 13:34:31.802136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10308a0, cid 0, qid 0 00:20:26.356 [2024-12-15 13:34:31.802143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030a00, cid 1, qid 0 00:20:26.356 [2024-12-15 13:34:31.802147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030b60, cid 2, qid 0 00:20:26.356 [2024-12-15 13:34:31.802152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.356 [2024-12-15 13:34:31.802157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030e20, cid 4, qid 0 00:20:26.356 [2024-12-15 13:34:31.802250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.356 [2024-12-15 13:34:31.802257] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.356 [2024-12-15 13:34:31.802260] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030e20) on tqpair=0xfe4510 00:20:26.356 
[2024-12-15 13:34:31.802270] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:26.356 [2024-12-15 13:34:31.802275] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:26.356 [2024-12-15 13:34:31.802286] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802294] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4510) 00:20:26.356 [2024-12-15 13:34:31.802301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.356 [2024-12-15 13:34:31.802318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030e20, cid 4, qid 0 00:20:26.356 [2024-12-15 13:34:31.802380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.356 [2024-12-15 13:34:31.802387] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.356 [2024-12-15 13:34:31.802391] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802394] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4510): datao=0, datal=4096, cccid=4 00:20:26.356 [2024-12-15 13:34:31.802399] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1030e20) on tqpair(0xfe4510): expected_datao=0, payload_size=4096 00:20:26.356 [2024-12-15 13:34:31.802406] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802410] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.356 [2024-12-15 13:34:31.802424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.356 [2024-12-15 13:34:31.802427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.356 [2024-12-15 13:34:31.802431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030e20) on tqpair=0xfe4510 00:20:26.357 [2024-12-15 13:34:31.802445] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:26.357 [2024-12-15 13:34:31.802477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4510) 00:20:26.357 [2024-12-15 13:34:31.802495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.357 [2024-12-15 13:34:31.802502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfe4510) 00:20:26.357 [2024-12-15 13:34:31.802515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:26.357 [2024-12-15 13:34:31.802540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030e20, cid 4, qid 0 00:20:26.357 [2024-12-15 13:34:31.802548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030f80, cid 5, qid 0 00:20:26.357 [2024-12-15 13:34:31.802657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.357 [2024-12-15 13:34:31.802666] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.357 [2024-12-15 13:34:31.802669] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802673] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4510): datao=0, datal=1024, cccid=4 00:20:26.357 [2024-12-15 13:34:31.802678] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1030e20) on tqpair(0xfe4510): expected_datao=0, payload_size=1024 00:20:26.357 [2024-12-15 13:34:31.802685] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802689] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.357 [2024-12-15 13:34:31.802700] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.357 [2024-12-15 13:34:31.802703] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.802707] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030f80) on tqpair=0xfe4510 00:20:26.357 [2024-12-15 13:34:31.846661] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.357 [2024-12-15 13:34:31.846682] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.357 [2024-12-15 13:34:31.846703] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846708] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030e20) on tqpair=0xfe4510 00:20:26.357 [2024-12-15 13:34:31.846722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846726] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846730] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4510) 00:20:26.357 [2024-12-15 13:34:31.846738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.357 [2024-12-15 13:34:31.846769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030e20, cid 4, qid 0 00:20:26.357 [2024-12-15 13:34:31.846863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.357 [2024-12-15 13:34:31.846869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.357 [2024-12-15 13:34:31.846872] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846876] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4510): datao=0, datal=3072, cccid=4 00:20:26.357 [2024-12-15 13:34:31.846880] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1030e20) on tqpair(0xfe4510): expected_datao=0, payload_size=3072 00:20:26.357 [2024-12-15 13:34:31.846888] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 
13:34:31.846891] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.357 [2024-12-15 13:34:31.846905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.357 [2024-12-15 13:34:31.846908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030e20) on tqpair=0xfe4510 00:20:26.357 [2024-12-15 13:34:31.846937] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.846945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfe4510) 00:20:26.357 [2024-12-15 13:34:31.846952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.357 [2024-12-15 13:34:31.846992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030e20, cid 4, qid 0 00:20:26.357 [2024-12-15 13:34:31.847066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.357 [2024-12-15 13:34:31.847073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.357 [2024-12-15 13:34:31.847076] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.847080] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfe4510): datao=0, datal=8, cccid=4 00:20:26.357 [2024-12-15 13:34:31.847084] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1030e20) on tqpair(0xfe4510): expected_datao=0, payload_size=8 00:20:26.357 [2024-12-15 13:34:31.847091] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.357 [2024-12-15 13:34:31.847095] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.357 ===================================================== 00:20:26.357 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:26.357 ===================================================== 00:20:26.357 Controller Capabilities/Features 00:20:26.357 ================================ 00:20:26.357 Vendor ID: 0000 00:20:26.357 Subsystem Vendor ID: 0000 00:20:26.357 Serial Number: .................... 00:20:26.357 Model Number: ........................................ 
00:20:26.357 Firmware Version: 24.01.1 00:20:26.357 Recommended Arb Burst: 0 00:20:26.357 IEEE OUI Identifier: 00 00 00 00:20:26.357 Multi-path I/O 00:20:26.357 May have multiple subsystem ports: No 00:20:26.357 May have multiple controllers: No 00:20:26.357 Associated with SR-IOV VF: No 00:20:26.357 Max Data Transfer Size: 131072 00:20:26.357 Max Number of Namespaces: 0 00:20:26.357 Max Number of I/O Queues: 1024 00:20:26.357 NVMe Specification Version (VS): 1.3 00:20:26.357 NVMe Specification Version (Identify): 1.3 00:20:26.357 Maximum Queue Entries: 128 00:20:26.357 Contiguous Queues Required: Yes 00:20:26.357 Arbitration Mechanisms Supported 00:20:26.357 Weighted Round Robin: Not Supported 00:20:26.357 Vendor Specific: Not Supported 00:20:26.357 Reset Timeout: 15000 ms 00:20:26.357 Doorbell Stride: 4 bytes 00:20:26.357 NVM Subsystem Reset: Not Supported 00:20:26.357 Command Sets Supported 00:20:26.357 NVM Command Set: Supported 00:20:26.357 Boot Partition: Not Supported 00:20:26.357 Memory Page Size Minimum: 4096 bytes 00:20:26.357 Memory Page Size Maximum: 4096 bytes 00:20:26.357 Persistent Memory Region: Not Supported 00:20:26.357 Optional Asynchronous Events Supported 00:20:26.357 Namespace Attribute Notices: Not Supported 00:20:26.357 Firmware Activation Notices: Not Supported 00:20:26.357 ANA Change Notices: Not Supported 00:20:26.357 PLE Aggregate Log Change Notices: Not Supported 00:20:26.357 LBA Status Info Alert Notices: Not Supported 00:20:26.357 EGE Aggregate Log Change Notices: Not Supported 00:20:26.357 Normal NVM Subsystem Shutdown event: Not Supported 00:20:26.357 Zone Descriptor Change Notices: Not Supported 00:20:26.357 Discovery Log Change Notices: Supported 00:20:26.357 Controller Attributes 00:20:26.357 128-bit Host Identifier: Not Supported 00:20:26.357 Non-Operational Permissive Mode: Not Supported 00:20:26.357 NVM Sets: Not Supported 00:20:26.357 Read Recovery Levels: Not Supported 00:20:26.357 Endurance Groups: Not Supported 00:20:26.357 Predictable Latency Mode: Not Supported 00:20:26.357 Traffic Based Keep ALive: Not Supported 00:20:26.357 Namespace Granularity: Not Supported 00:20:26.357 SQ Associations: Not Supported 00:20:26.357 UUID List: Not Supported 00:20:26.357 Multi-Domain Subsystem: Not Supported 00:20:26.357 Fixed Capacity Management: Not Supported 00:20:26.357 Variable Capacity Management: Not Supported 00:20:26.357 Delete Endurance Group: Not Supported 00:20:26.357 Delete NVM Set: Not Supported 00:20:26.357 Extended LBA Formats Supported: Not Supported 00:20:26.357 Flexible Data Placement Supported: Not Supported 00:20:26.357 00:20:26.357 Controller Memory Buffer Support 00:20:26.357 ================================ 00:20:26.357 Supported: No 00:20:26.357 00:20:26.357 Persistent Memory Region Support 00:20:26.357 ================================ 00:20:26.357 Supported: No 00:20:26.357 00:20:26.357 Admin Command Set Attributes 00:20:26.358 ============================ 00:20:26.358 Security Send/Receive: Not Supported 00:20:26.358 Format NVM: Not Supported 00:20:26.358 Firmware Activate/Download: Not Supported 00:20:26.358 Namespace Management: Not Supported 00:20:26.358 Device Self-Test: Not Supported 00:20:26.358 Directives: Not Supported 00:20:26.358 NVMe-MI: Not Supported 00:20:26.358 Virtualization Management: Not Supported 00:20:26.358 Doorbell Buffer Config: Not Supported 00:20:26.358 Get LBA Status Capability: Not Supported 00:20:26.358 Command & Feature Lockdown Capability: Not Supported 00:20:26.358 Abort Command Limit: 1 00:20:26.358 
Async Event Request Limit: 4 00:20:26.358 Number of Firmware Slots: N/A 00:20:26.358 Firmware Slot 1 Read-Only: N/A 00:20:26.358 [2024-12-15 13:34:31.887726] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.358 [2024-12-15 13:34:31.887751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.358 [2024-12-15 13:34:31.887772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.887777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030e20) on tqpair=0xfe4510 00:20:26.358 Firmware Activation Without Reset: N/A 00:20:26.358 Multiple Update Detection Support: N/A 00:20:26.358 Firmware Update Granularity: No Information Provided 00:20:26.358 Per-Namespace SMART Log: No 00:20:26.358 Asymmetric Namespace Access Log Page: Not Supported 00:20:26.358 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:26.358 Command Effects Log Page: Not Supported 00:20:26.358 Get Log Page Extended Data: Supported 00:20:26.358 Telemetry Log Pages: Not Supported 00:20:26.358 Persistent Event Log Pages: Not Supported 00:20:26.358 Supported Log Pages Log Page: May Support 00:20:26.358 Commands Supported & Effects Log Page: Not Supported 00:20:26.358 Feature Identifiers & Effects Log Page: May Support 00:20:26.358 NVMe-MI Commands & Effects Log Page: May Support 00:20:26.358 Data Area 4 for Telemetry Log: Not Supported 00:20:26.358 Error Log Page Entries Supported: 128 00:20:26.358 Keep Alive: Not Supported 00:20:26.358 00:20:26.358 NVM Command Set Attributes 00:20:26.358 ========================== 00:20:26.358 Submission Queue Entry Size 00:20:26.358 Max: 1 00:20:26.358 Min: 1 00:20:26.358 Completion Queue Entry Size 00:20:26.358 Max: 1 00:20:26.358 Min: 1 00:20:26.358 Number of Namespaces: 0 00:20:26.358 Compare Command: Not Supported 00:20:26.358 Write Uncorrectable Command: Not Supported 00:20:26.358 Dataset Management Command: Not Supported 00:20:26.358 Write Zeroes Command: Not Supported 00:20:26.358 Set Features Save Field: Not Supported 00:20:26.358 Reservations: Not Supported 00:20:26.358 Timestamp: Not Supported 00:20:26.358 Copy: Not Supported 00:20:26.358 Volatile Write Cache: Not Present 00:20:26.358 Atomic Write Unit (Normal): 1 00:20:26.358 Atomic Write Unit (PFail): 1 00:20:26.358 Atomic Compare & Write Unit: 1 00:20:26.358 Fused Compare & Write: Supported 00:20:26.358 Scatter-Gather List 00:20:26.358 SGL Command Set: Supported 00:20:26.358 SGL Keyed: Supported 00:20:26.358 SGL Bit Bucket Descriptor: Not Supported 00:20:26.358 SGL Metadata Pointer: Not Supported 00:20:26.358 Oversized SGL: Not Supported 00:20:26.358 SGL Metadata Address: Not Supported 00:20:26.358 SGL Offset: Supported 00:20:26.358 Transport SGL Data Block: Not Supported 00:20:26.358 Replay Protected Memory Block: Not Supported 00:20:26.358 00:20:26.358 Firmware Slot Information 00:20:26.358 ========================= 00:20:26.358 Active slot: 0 00:20:26.358 00:20:26.358 00:20:26.358 Error Log 00:20:26.358 ========= 00:20:26.358 00:20:26.358 Active Namespaces 00:20:26.358 ================= 00:20:26.358 Discovery Log Page 00:20:26.358 ================== 00:20:26.358 Generation Counter: 2 00:20:26.358 Number of Records: 2 00:20:26.358 Record Format: 0 00:20:26.358 00:20:26.358 Discovery Log Entry 0 00:20:26.358 ---------------------- 00:20:26.358 Transport Type: 3 (TCP) 00:20:26.358 Address Family: 1 (IPv4) 00:20:26.358 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:26.358 Entry Flags: 00:20:26.358 Duplicate
Returned Information: 1 00:20:26.358 Explicit Persistent Connection Support for Discovery: 1 00:20:26.358 Transport Requirements: 00:20:26.358 Secure Channel: Not Required 00:20:26.358 Port ID: 0 (0x0000) 00:20:26.358 Controller ID: 65535 (0xffff) 00:20:26.358 Admin Max SQ Size: 128 00:20:26.358 Transport Service Identifier: 4420 00:20:26.358 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:26.358 Transport Address: 10.0.0.2 00:20:26.358 Discovery Log Entry 1 00:20:26.358 ---------------------- 00:20:26.358 Transport Type: 3 (TCP) 00:20:26.358 Address Family: 1 (IPv4) 00:20:26.358 Subsystem Type: 2 (NVM Subsystem) 00:20:26.358 Entry Flags: 00:20:26.358 Duplicate Returned Information: 0 00:20:26.358 Explicit Persistent Connection Support for Discovery: 0 00:20:26.358 Transport Requirements: 00:20:26.358 Secure Channel: Not Required 00:20:26.358 Port ID: 0 (0x0000) 00:20:26.358 Controller ID: 65535 (0xffff) 00:20:26.358 Admin Max SQ Size: 128 00:20:26.358 Transport Service Identifier: 4420 00:20:26.358 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:26.358 Transport Address: 10.0.0.2 [2024-12-15 13:34:31.887888] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:26.358 [2024-12-15 13:34:31.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.358 [2024-12-15 13:34:31.887912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.358 [2024-12-15 13:34:31.887918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.358 [2024-12-15 13:34:31.887924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.358 [2024-12-15 13:34:31.887934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.887938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.887942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.358 [2024-12-15 13:34:31.887951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.358 [2024-12-15 13:34:31.888006] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.358 [2024-12-15 13:34:31.888071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.358 [2024-12-15 13:34:31.888077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.358 [2024-12-15 13:34:31.888081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.358 [2024-12-15 13:34:31.888093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888097] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.358 [2024-12-15 13:34:31.888107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.358 [2024-12-15 13:34:31.888145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.358 [2024-12-15 13:34:31.888211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.358 [2024-12-15 13:34:31.888218] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.358 [2024-12-15 13:34:31.888221] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888225] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.358 [2024-12-15 13:34:31.888231] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:26.358 [2024-12-15 13:34:31.888235] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:26.358 [2024-12-15 13:34:31.888245] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888249] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.358 [2024-12-15 13:34:31.888253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.888277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888339] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888343] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.888385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.888501] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888577] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888585] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.888641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888738] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.888769] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888841] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.888868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:26.359 [2024-12-15 13:34:31.888886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.888939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.888945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.888949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.888979] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.888983] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889002] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.889009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.889025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.889078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.889084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.889087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.889102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.889117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.889133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.889189] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.889195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.889198] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.889213] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889217] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.889228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.889244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.889296] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.889302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.889305] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.889320] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889324] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.889335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.889351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.889403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.889409] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.889412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.359 [2024-12-15 13:34:31.889427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.359 [2024-12-15 13:34:31.889435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.359 [2024-12-15 13:34:31.889442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.359 [2024-12-15 13:34:31.889458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.359 [2024-12-15 13:34:31.889551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.359 [2024-12-15 13:34:31.889559] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.359 [2024-12-15 13:34:31.889563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889567] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.889578] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889587] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.889594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.889625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.889682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.889689] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 
13:34:31.889693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.889708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889712] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.889724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.889742] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.889795] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.889807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.889811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.889827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.889843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.889861] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.889933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.889959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.889962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.889977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.889985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.889992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890009] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.890059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.890065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.890069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.890083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.890098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.890165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.890171] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.890175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.890189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890194] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890197] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.890204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.890271] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.890293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.890298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890302] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.890313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.890329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.890400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.890407] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.890411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.890426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890430] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.890441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890457] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.890511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.360 [2024-12-15 13:34:31.890521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.360 [2024-12-15 13:34:31.890525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.360 [2024-12-15 13:34:31.890541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.360 [2024-12-15 13:34:31.890549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.360 [2024-12-15 13:34:31.890556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.360 [2024-12-15 13:34:31.890573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.360 [2024-12-15 13:34:31.894677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.361 [2024-12-15 13:34:31.894696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.361 [2024-12-15 13:34:31.894717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.361 [2024-12-15 13:34:31.894721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.361 [2024-12-15 13:34:31.894735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.361 [2024-12-15 13:34:31.894740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.361 [2024-12-15 13:34:31.894744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfe4510) 00:20:26.361 [2024-12-15 13:34:31.894752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.361 [2024-12-15 13:34:31.894777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1030cc0, cid 3, qid 0 00:20:26.361 [2024-12-15 13:34:31.894836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.361 [2024-12-15 13:34:31.894842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.361 [2024-12-15 13:34:31.894845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.361 [2024-12-15 13:34:31.894849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1030cc0) on tqpair=0xfe4510 00:20:26.361 [2024-12-15 13:34:31.894858] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:26.361 00:20:26.361 13:34:31 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:26.361 [2024-12-15 13:34:31.930124] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:26.361 [2024-12-15 13:34:31.930177] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93544 ] 00:20:26.625 [2024-12-15 13:34:32.064948] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:26.625 [2024-12-15 13:34:32.064996] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:26.625 [2024-12-15 13:34:32.065003] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:26.625 [2024-12-15 13:34:32.065011] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:26.625 [2024-12-15 13:34:32.065018] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:26.625 [2024-12-15 13:34:32.065116] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:26.625 [2024-12-15 13:34:32.065156] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15a4510 0 00:20:26.625 [2024-12-15 13:34:32.070631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:26.625 [2024-12-15 13:34:32.070653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:26.625 [2024-12-15 13:34:32.070673] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:26.625 [2024-12-15 13:34:32.070677] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:26.625 [2024-12-15 13:34:32.070712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.070718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.070721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.625 [2024-12-15 13:34:32.070731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:26.625 [2024-12-15 13:34:32.070759] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.625 [2024-12-15 13:34:32.078630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.625 [2024-12-15 13:34:32.078654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.625 [2024-12-15 13:34:32.078674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.625 [2024-12-15 13:34:32.078692] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:26.625 [2024-12-15 13:34:32.078698] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:26.625 [2024-12-15 13:34:32.078704] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:26.625 [2024-12-15 13:34:32.078717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.625 
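The spdk_nvme_identify invocation above passes a transport ID string for the TCP target at 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1; the debug entries that follow are the host-side connect and init sequence that string triggers. A minimal sketch of driving the same connect from the public SPDK host C API (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data); this is illustrative under default options, not the identify tool's own source, and error handling is kept to a minimum.

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the connect/init sequence traced in this log:
         * icreq/icresp, FABRIC CONNECT, read VS/CAP, enable, Identify. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s\n",
               (const char *)cdata->sn, (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }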
[2024-12-15 13:34:32.078722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.625 [2024-12-15 13:34:32.078733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.625 [2024-12-15 13:34:32.078761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.625 [2024-12-15 13:34:32.078833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.625 [2024-12-15 13:34:32.078840] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.625 [2024-12-15 13:34:32.078843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078847] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.625 [2024-12-15 13:34:32.078852] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:26.625 [2024-12-15 13:34:32.078860] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:26.625 [2024-12-15 13:34:32.078867] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078874] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.625 [2024-12-15 13:34:32.078880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.625 [2024-12-15 13:34:32.078898] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.625 [2024-12-15 13:34:32.078966] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.625 [2024-12-15 13:34:32.078972] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.625 [2024-12-15 13:34:32.078975] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.078979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.625 [2024-12-15 13:34:32.078985] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:26.625 [2024-12-15 13:34:32.078993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:26.625 [2024-12-15 13:34:32.079000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.625 [2024-12-15 13:34:32.079014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.625 [2024-12-15 13:34:32.079031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.625 [2024-12-15 13:34:32.079084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:26.625 [2024-12-15 13:34:32.079090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.625 [2024-12-15 13:34:32.079094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.625 [2024-12-15 13:34:32.079112] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:26.625 [2024-12-15 13:34:32.079121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079126] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079129] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.625 [2024-12-15 13:34:32.079136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.625 [2024-12-15 13:34:32.079152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.625 [2024-12-15 13:34:32.079208] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.625 [2024-12-15 13:34:32.079214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.625 [2024-12-15 13:34:32.079217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.625 [2024-12-15 13:34:32.079221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.625 [2024-12-15 13:34:32.079226] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:26.625 [2024-12-15 13:34:32.079231] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:26.626 [2024-12-15 13:34:32.079238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:26.626 [2024-12-15 13:34:32.079343] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:26.626 [2024-12-15 13:34:32.079347] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:26.626 [2024-12-15 13:34:32.079354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079362] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.079368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.626 [2024-12-15 13:34:32.079386] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.626 [2024-12-15 13:34:32.079436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.626 [2024-12-15 13:34:32.079442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.626 [2024-12-15 13:34:32.079445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.626 
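At this point the trace for nqn.2016-06.io.spdk:cnode1 shows the host finding CC.EN = 0 && CSTS.RDY = 0, writing CC.EN = 1, and then waiting for CSTS.RDY = 1; over NVMe-oF these register accesses travel as the FABRIC PROPERTY SET/GET capsules printed around here. A spec-level sketch of that enable handshake follows; the property accessors are hypothetical stand-ins, not SPDK internals.

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC    0x14u      /* Controller Configuration offset */
    #define NVME_REG_CSTS  0x1cu      /* Controller Status offset        */
    #define NVME_CC_EN     (1u << 0)  /* CC.EN                           */
    #define NVME_CSTS_RDY  (1u << 0)  /* CSTS.RDY                        */

    /* Hypothetical property accessors; on a fabrics controller each call
     * corresponds to one FABRIC PROPERTY GET/SET capsule in the log. */
    typedef uint32_t (*prop_get_fn)(void *ctx, uint32_t offset);
    typedef void (*prop_set_fn)(void *ctx, uint32_t offset, uint32_t value);

    /* Write CC.EN = 1, then poll CSTS until RDY = 1, bounded by a caller-
     * supplied attempt budget (the log's state machine uses ms timeouts). */
    static bool enable_controller(void *ctx, prop_get_fn get, prop_set_fn set,
                                  unsigned max_polls)
    {
        set(ctx, NVME_REG_CC, get(ctx, NVME_REG_CC) | NVME_CC_EN);
        for (unsigned i = 0; i < max_polls; i++) {
            if (get(ctx, NVME_REG_CSTS) & NVME_CSTS_RDY) {
                return true;
            }
        }
        return false;
    }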
[2024-12-15 13:34:32.079449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.626 [2024-12-15 13:34:32.079454] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:26.626 [2024-12-15 13:34:32.079464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.079478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.626 [2024-12-15 13:34:32.079494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.626 [2024-12-15 13:34:32.079546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.626 [2024-12-15 13:34:32.079552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.626 [2024-12-15 13:34:32.079556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.626 [2024-12-15 13:34:32.079564] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:26.626 [2024-12-15 13:34:32.079569] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.079577] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:26.626 [2024-12-15 13:34:32.079590] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.079598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079606] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.079613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.626 [2024-12-15 13:34:32.079661] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.626 [2024-12-15 13:34:32.079752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.626 [2024-12-15 13:34:32.079759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.626 [2024-12-15 13:34:32.079763] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079766] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=4096, cccid=0 00:20:26.626 [2024-12-15 13:34:32.079771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f08a0) on tqpair(0x15a4510): expected_datao=0, payload_size=4096 00:20:26.626 [2024-12-15 13:34:32.079778] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079782] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.626 [2024-12-15 13:34:32.079796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.626 [2024-12-15 13:34:32.079799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.626 [2024-12-15 13:34:32.079811] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:26.626 [2024-12-15 13:34:32.079816] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:26.626 [2024-12-15 13:34:32.079820] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:26.626 [2024-12-15 13:34:32.079824] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:26.626 [2024-12-15 13:34:32.079829] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:26.626 [2024-12-15 13:34:32.079833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.079846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.079854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.079869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:26.626 [2024-12-15 13:34:32.079888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.626 [2024-12-15 13:34:32.079954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.626 [2024-12-15 13:34:32.079961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.626 [2024-12-15 13:34:32.079964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f08a0) on tqpair=0x15a4510 00:20:26.626 [2024-12-15 13:34:32.079976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.079990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.626 [2024-12-15 13:34:32.079996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.079999] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.080008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.626 [2024-12-15 13:34:32.080014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.080026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.626 [2024-12-15 13:34:32.080032] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.080044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.626 [2024-12-15 13:34:32.080049] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.080061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:26.626 [2024-12-15 13:34:32.080068] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.626 [2024-12-15 13:34:32.080076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.626 [2024-12-15 13:34:32.080083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.626 [2024-12-15 13:34:32.080103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f08a0, cid 0, qid 0 00:20:26.626 [2024-12-15 13:34:32.080110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0a00, cid 1, qid 0 00:20:26.626 [2024-12-15 13:34:32.080114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0b60, cid 2, qid 0 00:20:26.626 [2024-12-15 13:34:32.080119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.626 [2024-12-15 13:34:32.080123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.626 [2024-12-15 13:34:32.080213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.627 [2024-12-15 13:34:32.080219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.627 [2024-12-15 13:34:32.080223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080226] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.627 [2024-12-15 13:34:32.080232] 
nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:26.627 [2024-12-15 13:34:32.080237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080270] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.627 [2024-12-15 13:34:32.080277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:26.627 [2024-12-15 13:34:32.080296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.627 [2024-12-15 13:34:32.080356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.627 [2024-12-15 13:34:32.080362] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.627 [2024-12-15 13:34:32.080366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.627 [2024-12-15 13:34:32.080425] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.627 [2024-12-15 13:34:32.080457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.627 [2024-12-15 13:34:32.080475] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.627 [2024-12-15 13:34:32.080553] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.627 [2024-12-15 13:34:32.080568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.627 [2024-12-15 13:34:32.080573] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080576] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=4096, cccid=4 00:20:26.627 [2024-12-15 13:34:32.080581] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f0e20) on tqpair(0x15a4510): expected_datao=0, payload_size=4096 00:20:26.627 [2024-12-15 
13:34:32.080597] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080603] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.627 [2024-12-15 13:34:32.080617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.627 [2024-12-15 13:34:32.080620] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080624] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.627 [2024-12-15 13:34:32.080638] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:26.627 [2024-12-15 13:34:32.080648] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080658] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080665] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080673] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.627 [2024-12-15 13:34:32.080679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.627 [2024-12-15 13:34:32.080700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.627 [2024-12-15 13:34:32.080790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.627 [2024-12-15 13:34:32.080796] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.627 [2024-12-15 13:34:32.080800] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080803] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=4096, cccid=4 00:20:26.627 [2024-12-15 13:34:32.080808] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f0e20) on tqpair(0x15a4510): expected_datao=0, payload_size=4096 00:20:26.627 [2024-12-15 13:34:32.080815] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080819] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.627 [2024-12-15 13:34:32.080838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.627 [2024-12-15 13:34:32.080842] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080845] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.627 [2024-12-15 13:34:32.080860] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.080871] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
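
[Editor's note, not part of the captured output] The records above trace the IDENTIFY ACTIVE NS / IDENTIFY NS / namespace-descriptor steps and report "Namespace 1 was added". For reference, a minimal sketch of how an application could walk those active namespaces through SPDK's public host API, assuming a controller handle already obtained with spdk_nvme_connect(); this is illustrative only and is not the test's code:

/*
 * Sketch only (not from the test): given a connected controller, walk the
 * active namespaces that the IDENTIFY steps above populated and print
 * their capacity.
 */
#include <stdio.h>
#include <inttypes.h>

#include "spdk/nvme.h"

static void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
			continue;
		}
		printf("Namespace %u: %" PRIu64 " blocks of %u bytes\n",
		       nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}
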
00:20:26.627 [2024-12-15 13:34:32.080879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080886] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.627 [2024-12-15 13:34:32.080893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.627 [2024-12-15 13:34:32.080912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.627 [2024-12-15 13:34:32.080981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.627 [2024-12-15 13:34:32.080988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.627 [2024-12-15 13:34:32.080991] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.080995] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=4096, cccid=4 00:20:26.627 [2024-12-15 13:34:32.080999] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f0e20) on tqpair(0x15a4510): expected_datao=0, payload_size=4096 00:20:26.627 [2024-12-15 13:34:32.081006] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081010] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081018] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.627 [2024-12-15 13:34:32.081023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.627 [2024-12-15 13:34:32.081027] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081031] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.627 [2024-12-15 13:34:32.081039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081048] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081057] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081064] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081074] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:26.627 [2024-12-15 13:34:32.081078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:26.627 [2024-12-15 13:34:32.081083] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:26.627 [2024-12-15 13:34:32.081097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 
13:34:32.081101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081105] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.627 [2024-12-15 13:34:32.081111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.627 [2024-12-15 13:34:32.081118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.627 [2024-12-15 13:34:32.081125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.628 [2024-12-15 13:34:32.081154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.628 [2024-12-15 13:34:32.081161] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0f80, cid 5, qid 0 00:20:26.628 [2024-12-15 13:34:32.081225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.081231] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.081235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.081246] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.081252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.081255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0f80) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.081269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081276] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081302] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0f80, cid 5, qid 0 00:20:26.628 [2024-12-15 13:34:32.081363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.081369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.081373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0f80) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.081387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081391] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081395] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0f80, cid 5, qid 0 00:20:26.628 [2024-12-15 13:34:32.081472] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.081478] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.081481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0f80) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.081495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081529] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0f80, cid 5, qid 0 00:20:26.628 [2024-12-15 13:34:32.081626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.081634] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.081638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0f80) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.081656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081703] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081707] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081713] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15a4510) 00:20:26.628 [2024-12-15 13:34:32.081734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.628 [2024-12-15 13:34:32.081755] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0f80, cid 5, qid 0 00:20:26.628 [2024-12-15 13:34:32.081762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0e20, cid 4, qid 0 00:20:26.628 [2024-12-15 13:34:32.081768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f10e0, cid 6, qid 0 00:20:26.628 [2024-12-15 13:34:32.081772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1240, cid 7, qid 0 00:20:26.628 [2024-12-15 13:34:32.081941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.628 [2024-12-15 13:34:32.081948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.628 [2024-12-15 13:34:32.081951] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081955] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=8192, cccid=5 00:20:26.628 [2024-12-15 13:34:32.081959] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f0f80) on tqpair(0x15a4510): expected_datao=0, payload_size=8192 00:20:26.628 [2024-12-15 13:34:32.081975] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081979] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081984] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.628 [2024-12-15 13:34:32.081990] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.628 [2024-12-15 13:34:32.081993] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.081997] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=512, cccid=4 00:20:26.628 [2024-12-15 13:34:32.082001] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f0e20) on tqpair(0x15a4510): expected_datao=0, payload_size=512 00:20:26.628 [2024-12-15 13:34:32.082008] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082011] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.628 [2024-12-15 13:34:32.082022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.628 [2024-12-15 13:34:32.082025] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082029] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=512, cccid=6 00:20:26.628 [2024-12-15 13:34:32.082033] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x15f10e0) on tqpair(0x15a4510): expected_datao=0, payload_size=512 00:20:26.628 [2024-12-15 13:34:32.082040] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082043] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:26.628 [2024-12-15 13:34:32.082054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.628 [2024-12-15 13:34:32.082057] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082061] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15a4510): datao=0, datal=4096, cccid=7 00:20:26.628 [2024-12-15 13:34:32.082065] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15f1240) on tqpair(0x15a4510): expected_datao=0, payload_size=4096 00:20:26.628 [2024-12-15 13:34:32.082072] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082075] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.082089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.082093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.628 [2024-12-15 13:34:32.082096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0f80) on tqpair=0x15a4510 00:20:26.628 [2024-12-15 13:34:32.082111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.628 [2024-12-15 13:34:32.082118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.628 [2024-12-15 13:34:32.082121] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.629 [2024-12-15 13:34:32.082125] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0e20) on tqpair=0x15a4510 00:20:26.629 [2024-12-15 13:34:32.082135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.629 [2024-12-15 13:34:32.082140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.629 [2024-12-15 13:34:32.082144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.629 [2024-12-15 13:34:32.082147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f10e0) on tqpair=0x15a4510 00:20:26.629 [2024-12-15 13:34:32.082155] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.629 [2024-12-15 13:34:32.082160] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.629 [2024-12-15 13:34:32.082164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.629 [2024-12-15 13:34:32.082167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1240) on tqpair=0x15a4510 00:20:26.629 ===================================================== 00:20:26.629 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.629 ===================================================== 00:20:26.629 Controller Capabilities/Features 00:20:26.629 ================================ 00:20:26.629 Vendor ID: 8086 00:20:26.629 Subsystem Vendor ID: 8086 00:20:26.629 Serial Number: SPDK00000000000001 00:20:26.629 Model Number: SPDK bdev Controller 00:20:26.629 Firmware Version: 24.01.1 00:20:26.629 Recommended Arb 
Burst: 6 00:20:26.629 IEEE OUI Identifier: e4 d2 5c 00:20:26.629 Multi-path I/O 00:20:26.629 May have multiple subsystem ports: Yes 00:20:26.629 May have multiple controllers: Yes 00:20:26.629 Associated with SR-IOV VF: No 00:20:26.629 Max Data Transfer Size: 131072 00:20:26.629 Max Number of Namespaces: 32 00:20:26.629 Max Number of I/O Queues: 127 00:20:26.629 NVMe Specification Version (VS): 1.3 00:20:26.629 NVMe Specification Version (Identify): 1.3 00:20:26.629 Maximum Queue Entries: 128 00:20:26.629 Contiguous Queues Required: Yes 00:20:26.629 Arbitration Mechanisms Supported 00:20:26.629 Weighted Round Robin: Not Supported 00:20:26.629 Vendor Specific: Not Supported 00:20:26.629 Reset Timeout: 15000 ms 00:20:26.629 Doorbell Stride: 4 bytes 00:20:26.629 NVM Subsystem Reset: Not Supported 00:20:26.629 Command Sets Supported 00:20:26.629 NVM Command Set: Supported 00:20:26.629 Boot Partition: Not Supported 00:20:26.629 Memory Page Size Minimum: 4096 bytes 00:20:26.629 Memory Page Size Maximum: 4096 bytes 00:20:26.629 Persistent Memory Region: Not Supported 00:20:26.629 Optional Asynchronous Events Supported 00:20:26.629 Namespace Attribute Notices: Supported 00:20:26.629 Firmware Activation Notices: Not Supported 00:20:26.629 ANA Change Notices: Not Supported 00:20:26.629 PLE Aggregate Log Change Notices: Not Supported 00:20:26.629 LBA Status Info Alert Notices: Not Supported 00:20:26.629 EGE Aggregate Log Change Notices: Not Supported 00:20:26.629 Normal NVM Subsystem Shutdown event: Not Supported 00:20:26.629 Zone Descriptor Change Notices: Not Supported 00:20:26.629 Discovery Log Change Notices: Not Supported 00:20:26.629 Controller Attributes 00:20:26.629 128-bit Host Identifier: Supported 00:20:26.629 Non-Operational Permissive Mode: Not Supported 00:20:26.629 NVM Sets: Not Supported 00:20:26.629 Read Recovery Levels: Not Supported 00:20:26.629 Endurance Groups: Not Supported 00:20:26.629 Predictable Latency Mode: Not Supported 00:20:26.629 Traffic Based Keep ALive: Not Supported 00:20:26.629 Namespace Granularity: Not Supported 00:20:26.629 SQ Associations: Not Supported 00:20:26.629 UUID List: Not Supported 00:20:26.629 Multi-Domain Subsystem: Not Supported 00:20:26.629 Fixed Capacity Management: Not Supported 00:20:26.629 Variable Capacity Management: Not Supported 00:20:26.629 Delete Endurance Group: Not Supported 00:20:26.629 Delete NVM Set: Not Supported 00:20:26.629 Extended LBA Formats Supported: Not Supported 00:20:26.629 Flexible Data Placement Supported: Not Supported 00:20:26.629 00:20:26.629 Controller Memory Buffer Support 00:20:26.629 ================================ 00:20:26.629 Supported: No 00:20:26.629 00:20:26.629 Persistent Memory Region Support 00:20:26.629 ================================ 00:20:26.629 Supported: No 00:20:26.629 00:20:26.629 Admin Command Set Attributes 00:20:26.629 ============================ 00:20:26.629 Security Send/Receive: Not Supported 00:20:26.629 Format NVM: Not Supported 00:20:26.629 Firmware Activate/Download: Not Supported 00:20:26.629 Namespace Management: Not Supported 00:20:26.629 Device Self-Test: Not Supported 00:20:26.629 Directives: Not Supported 00:20:26.629 NVMe-MI: Not Supported 00:20:26.629 Virtualization Management: Not Supported 00:20:26.629 Doorbell Buffer Config: Not Supported 00:20:26.629 Get LBA Status Capability: Not Supported 00:20:26.629 Command & Feature Lockdown Capability: Not Supported 00:20:26.629 Abort Command Limit: 4 00:20:26.629 Async Event Request Limit: 4 00:20:26.629 Number of Firmware Slots: N/A 
00:20:26.629 Firmware Slot 1 Read-Only: N/A 00:20:26.629 Firmware Activation Without Reset: N/A 00:20:26.629 Multiple Update Detection Support: N/A 00:20:26.629 Firmware Update Granularity: No Information Provided 00:20:26.629 Per-Namespace SMART Log: No 00:20:26.629 Asymmetric Namespace Access Log Page: Not Supported 00:20:26.629 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:26.629 Command Effects Log Page: Supported 00:20:26.629 Get Log Page Extended Data: Supported 00:20:26.629 Telemetry Log Pages: Not Supported 00:20:26.629 Persistent Event Log Pages: Not Supported 00:20:26.629 Supported Log Pages Log Page: May Support 00:20:26.629 Commands Supported & Effects Log Page: Not Supported 00:20:26.629 Feature Identifiers & Effects Log Page:May Support 00:20:26.629 NVMe-MI Commands & Effects Log Page: May Support 00:20:26.629 Data Area 4 for Telemetry Log: Not Supported 00:20:26.629 Error Log Page Entries Supported: 128 00:20:26.629 Keep Alive: Supported 00:20:26.629 Keep Alive Granularity: 10000 ms 00:20:26.629 00:20:26.629 NVM Command Set Attributes 00:20:26.629 ========================== 00:20:26.629 Submission Queue Entry Size 00:20:26.629 Max: 64 00:20:26.629 Min: 64 00:20:26.629 Completion Queue Entry Size 00:20:26.629 Max: 16 00:20:26.629 Min: 16 00:20:26.629 Number of Namespaces: 32 00:20:26.629 Compare Command: Supported 00:20:26.629 Write Uncorrectable Command: Not Supported 00:20:26.629 Dataset Management Command: Supported 00:20:26.629 Write Zeroes Command: Supported 00:20:26.629 Set Features Save Field: Not Supported 00:20:26.629 Reservations: Supported 00:20:26.629 Timestamp: Not Supported 00:20:26.629 Copy: Supported 00:20:26.629 Volatile Write Cache: Present 00:20:26.629 Atomic Write Unit (Normal): 1 00:20:26.629 Atomic Write Unit (PFail): 1 00:20:26.629 Atomic Compare & Write Unit: 1 00:20:26.629 Fused Compare & Write: Supported 00:20:26.629 Scatter-Gather List 00:20:26.629 SGL Command Set: Supported 00:20:26.629 SGL Keyed: Supported 00:20:26.629 SGL Bit Bucket Descriptor: Not Supported 00:20:26.629 SGL Metadata Pointer: Not Supported 00:20:26.629 Oversized SGL: Not Supported 00:20:26.629 SGL Metadata Address: Not Supported 00:20:26.629 SGL Offset: Supported 00:20:26.629 Transport SGL Data Block: Not Supported 00:20:26.629 Replay Protected Memory Block: Not Supported 00:20:26.629 00:20:26.629 Firmware Slot Information 00:20:26.629 ========================= 00:20:26.629 Active slot: 1 00:20:26.629 Slot 1 Firmware Revision: 24.01.1 00:20:26.629 00:20:26.629 00:20:26.629 Commands Supported and Effects 00:20:26.629 ============================== 00:20:26.629 Admin Commands 00:20:26.629 -------------- 00:20:26.629 Get Log Page (02h): Supported 00:20:26.629 Identify (06h): Supported 00:20:26.629 Abort (08h): Supported 00:20:26.629 Set Features (09h): Supported 00:20:26.629 Get Features (0Ah): Supported 00:20:26.629 Asynchronous Event Request (0Ch): Supported 00:20:26.629 Keep Alive (18h): Supported 00:20:26.629 I/O Commands 00:20:26.629 ------------ 00:20:26.629 Flush (00h): Supported LBA-Change 00:20:26.629 Write (01h): Supported LBA-Change 00:20:26.629 Read (02h): Supported 00:20:26.629 Compare (05h): Supported 00:20:26.629 Write Zeroes (08h): Supported LBA-Change 00:20:26.629 Dataset Management (09h): Supported LBA-Change 00:20:26.629 Copy (19h): Supported LBA-Change 00:20:26.630 Unknown (79h): Supported LBA-Change 00:20:26.630 Unknown (7Ah): Supported 00:20:26.630 00:20:26.630 Error Log 00:20:26.630 ========= 00:20:26.630 00:20:26.630 Arbitration 00:20:26.630 =========== 
00:20:26.630 Arbitration Burst: 1 00:20:26.630 00:20:26.630 Power Management 00:20:26.630 ================ 00:20:26.630 Number of Power States: 1 00:20:26.630 Current Power State: Power State #0 00:20:26.630 Power State #0: 00:20:26.630 Max Power: 0.00 W 00:20:26.630 Non-Operational State: Operational 00:20:26.630 Entry Latency: Not Reported 00:20:26.630 Exit Latency: Not Reported 00:20:26.630 Relative Read Throughput: 0 00:20:26.630 Relative Read Latency: 0 00:20:26.630 Relative Write Throughput: 0 00:20:26.630 Relative Write Latency: 0 00:20:26.630 Idle Power: Not Reported 00:20:26.630 Active Power: Not Reported 00:20:26.630 Non-Operational Permissive Mode: Not Supported 00:20:26.630 00:20:26.630 Health Information 00:20:26.630 ================== 00:20:26.630 Critical Warnings: 00:20:26.630 Available Spare Space: OK 00:20:26.630 Temperature: OK 00:20:26.630 Device Reliability: OK 00:20:26.630 Read Only: No 00:20:26.630 Volatile Memory Backup: OK 00:20:26.630 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:26.630 Temperature Threshold: [2024-12-15 13:34:32.082263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.082280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 [2024-12-15 13:34:32.082303] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f1240, cid 7, qid 0 00:20:26.630 [2024-12-15 13:34:32.082366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.082372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.082376] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f1240) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.082427] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:26.630 [2024-12-15 13:34:32.082440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.630 [2024-12-15 13:34:32.082447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.630 [2024-12-15 13:34:32.082453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.630 [2024-12-15 13:34:32.082458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.630 [2024-12-15 13:34:32.082467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082474] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.082482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 
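
[Editor's note, not part of the captured output] The controller capabilities/features dump above is the result of the admin-queue initialization traced in the preceding debug records (CSTS.RDY wait, IDENTIFY, AER configuration, keep-alive, number of queues). The same data is reachable through SPDK's public host API; a minimal sketch follows, with the transport address 10.0.0.2:4420 and subsystem NQN nqn.2016-06.io.spdk:cnode1 taken from the log and everything else purely illustrative, not the test's actual code:

/*
 * Minimal sketch (not from the test): connect to the NVMe-oF/TCP subsystem
 * identified in the log above and print a few of the same controller data
 * fields that appear in the dump.
 */
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Describe the target shown in the log: TCP transport, 10.0.0.2:4420. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "%s", "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", "nqn.2016-06.io.spdk:cnode1");

	/* Connecting drives the same admin-queue init sequence traced above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), cdata->mn);
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
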
[2024-12-15 13:34:32.082504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.630 [2024-12-15 13:34:32.082560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.082566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.082570] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.082573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.082581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.086663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 [2024-12-15 13:34:32.086694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.630 [2024-12-15 13:34:32.086768] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.086775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.086778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.086788] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:26.630 [2024-12-15 13:34:32.086792] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:26.630 [2024-12-15 13:34:32.086802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.086816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 [2024-12-15 13:34:32.086834] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.630 [2024-12-15 13:34:32.086892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.086898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.086901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086905] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.086916] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086920] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.086924] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.086930] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 [2024-12-15 13:34:32.086947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.630 [2024-12-15 13:34:32.086996] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.087002] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.087006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087009] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.087019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087027] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.630 [2024-12-15 13:34:32.087034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.630 [2024-12-15 13:34:32.087050] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.630 [2024-12-15 13:34:32.087098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.630 [2024-12-15 13:34:32.087104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.630 [2024-12-15 13:34:32.087107] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.630 [2024-12-15 13:34:32.087121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.630 [2024-12-15 13:34:32.087128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087232] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087258] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087308] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087314] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087321] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087331] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087437] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087441] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087517] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087542] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087546] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087556] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087561] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087587] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:26.631 [2024-12-15 13:34:32.087663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087667] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087781] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087791] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087823] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.087895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087900] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087903] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.087910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.087927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631 [2024-12-15 13:34:32.087979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.631 [2024-12-15 13:34:32.087986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.631 [2024-12-15 13:34:32.087990] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.087993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.631 [2024-12-15 13:34:32.088004] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.088008] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.631 [2024-12-15 13:34:32.088012] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15a4510) 00:20:26.631 [2024-12-15 13:34:32.088019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.631 [2024-12-15 13:34:32.088035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15f0cc0, cid 3, qid 0 00:20:26.631
(the same debug cycle — nvme_tcp_pdu_ch_handle pdu type 5, nvme_tcp_pdu_psh_handle, nvme_tcp_capsule_resp_hdr_handle, nvme_tcp_req_complete_safe, nvme_tcp_build_contig_request, nvme_tcp_qpair_capsule_cmd_send, FABRIC PROPERTY GET qid:0 cid:3, nvme_tcp_qpair_cmd_send_complete — repeats for tcp_req 0x15f0cc0 on tqpair 0x15a4510 through 13:34:32.094716, with only the timestamps changing)
00:20:26.633 [2024-12-15 13:34:32.094811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.633 [2024-12-15 13:34:32.094818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.633 [2024-12-15 13:34:32.094821] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.633 [2024-12-15 13:34:32.094825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*:
complete tcp_req(0x15f0cc0) on tqpair=0x15a4510 00:20:26.633 [2024-12-15 13:34:32.094833] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:20:26.633 0 Kelvin (-273 Celsius) 00:20:26.633 Available Spare: 0% 00:20:26.633 Available Spare Threshold: 0% 00:20:26.633 Life Percentage Used: 0% 00:20:26.633 Data Units Read: 0 00:20:26.633 Data Units Written: 0 00:20:26.633 Host Read Commands: 0 00:20:26.633 Host Write Commands: 0 00:20:26.633 Controller Busy Time: 0 minutes 00:20:26.633 Power Cycles: 0 00:20:26.633 Power On Hours: 0 hours 00:20:26.633 Unsafe Shutdowns: 0 00:20:26.633 Unrecoverable Media Errors: 0 00:20:26.633 Lifetime Error Log Entries: 0 00:20:26.633 Warning Temperature Time: 0 minutes 00:20:26.633 Critical Temperature Time: 0 minutes 00:20:26.633 00:20:26.633 Number of Queues 00:20:26.633 ================ 00:20:26.633 Number of I/O Submission Queues: 127 00:20:26.633 Number of I/O Completion Queues: 127 00:20:26.633 00:20:26.633 Active Namespaces 00:20:26.633 ================= 00:20:26.633 Namespace ID:1 00:20:26.633 Error Recovery Timeout: Unlimited 00:20:26.633 Command Set Identifier: NVM (00h) 00:20:26.633 Deallocate: Supported 00:20:26.633 Deallocated/Unwritten Error: Not Supported 00:20:26.633 Deallocated Read Value: Unknown 00:20:26.633 Deallocate in Write Zeroes: Not Supported 00:20:26.634 Deallocated Guard Field: 0xFFFF 00:20:26.634 Flush: Supported 00:20:26.634 Reservation: Supported 00:20:26.634 Namespace Sharing Capabilities: Multiple Controllers 00:20:26.634 Size (in LBAs): 131072 (0GiB) 00:20:26.634 Capacity (in LBAs): 131072 (0GiB) 00:20:26.634 Utilization (in LBAs): 131072 (0GiB) 00:20:26.634 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:26.634 EUI64: ABCDEF0123456789 00:20:26.634 UUID: eeaef630-0058-490c-bc41-76fd8477c4fb 00:20:26.634 Thin Provisioning: Not Supported 00:20:26.634 Per-NS Atomic Units: Yes 00:20:26.634 Atomic Boundary Size (Normal): 0 00:20:26.634 Atomic Boundary Size (PFail): 0 00:20:26.634 Atomic Boundary Offset: 0 00:20:26.634 Maximum Single Source Range Length: 65535 00:20:26.634 Maximum Copy Length: 65535 00:20:26.634 Maximum Source Range Count: 1 00:20:26.634 NGUID/EUI64 Never Reused: No 00:20:26.634 Namespace Write Protected: No 00:20:26.634 Number of LBA Formats: 1 00:20:26.634 Current LBA Format: LBA Format #00 00:20:26.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:26.634 00:20:26.634 13:34:32 -- host/identify.sh@51 -- # sync 00:20:26.634 13:34:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.634 13:34:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.634 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:26.634 13:34:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.634 13:34:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:26.634 13:34:32 -- host/identify.sh@56 -- # nvmftestfini 00:20:26.634 13:34:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:26.634 13:34:32 -- nvmf/common.sh@116 -- # sync 00:20:26.634 13:34:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:26.634 13:34:32 -- nvmf/common.sh@119 -- # set +e 00:20:26.634 13:34:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:26.634 13:34:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:26.634 rmmod nvme_tcp 00:20:26.634 rmmod nvme_fabrics 00:20:26.634 rmmod nvme_keyring 00:20:26.634 13:34:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:26.634 13:34:32 -- 
nvmf/common.sh@123 -- # set -e 00:20:26.634 13:34:32 -- nvmf/common.sh@124 -- # return 0 00:20:26.634 13:34:32 -- nvmf/common.sh@477 -- # '[' -n 93489 ']' 00:20:26.634 13:34:32 -- nvmf/common.sh@478 -- # killprocess 93489 00:20:26.634 13:34:32 -- common/autotest_common.sh@936 -- # '[' -z 93489 ']' 00:20:26.634 13:34:32 -- common/autotest_common.sh@940 -- # kill -0 93489 00:20:26.634 13:34:32 -- common/autotest_common.sh@941 -- # uname 00:20:26.634 13:34:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.634 13:34:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93489 00:20:26.634 13:34:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:26.634 13:34:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:26.634 killing process with pid 93489 00:20:26.634 13:34:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93489' 00:20:26.634 13:34:32 -- common/autotest_common.sh@955 -- # kill 93489 00:20:26.634 [2024-12-15 13:34:32.266506] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:26.634 13:34:32 -- common/autotest_common.sh@960 -- # wait 93489 00:20:26.893 13:34:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.893 13:34:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.893 13:34:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.893 13:34:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.893 13:34:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.893 13:34:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.893 13:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.893 13:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.893 13:34:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:26.893 00:20:26.893 real 0m2.682s 00:20:26.893 user 0m7.631s 00:20:26.893 sys 0m0.682s 00:20:26.893 13:34:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:26.893 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:26.893 ************************************ 00:20:26.893 END TEST nvmf_identify 00:20:26.893 ************************************ 00:20:27.153 13:34:32 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:27.153 13:34:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:27.153 13:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.153 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:27.153 ************************************ 00:20:27.153 START TEST nvmf_perf 00:20:27.153 ************************************ 00:20:27.153 13:34:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:27.153 * Looking for test storage... 
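For readability, the identify-test teardown traced just above boils down to a short sequence. This is a reconstruction from the xtrace, not the literal harness script: the $nvmfpid variable is illustrative (the log tracked the target as pid 93489), and the namespace removal is assumed to be what _remove_spdk_ns performs.

# drop the test subsystem from the running target, then unload the host-side modules
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
modprobe -v -r nvme-fabrics
# stop nvmf_tgt and tear down the test network namespace
kill "$nvmfpid"                      # illustrative; the trace shows killprocess 93489
ip netns delete nvmf_tgt_ns_spdk     # assumption: the effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if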
00:20:27.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:27.153 13:34:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:27.153 13:34:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:27.153 13:34:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:27.153 13:34:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:27.153 13:34:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:27.153 13:34:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:27.153 13:34:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:27.153 13:34:32 -- scripts/common.sh@335 -- # IFS=.-: 00:20:27.153 13:34:32 -- scripts/common.sh@335 -- # read -ra ver1 00:20:27.153 13:34:32 -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.153 13:34:32 -- scripts/common.sh@336 -- # read -ra ver2 00:20:27.153 13:34:32 -- scripts/common.sh@337 -- # local 'op=<' 00:20:27.153 13:34:32 -- scripts/common.sh@339 -- # ver1_l=2 00:20:27.153 13:34:32 -- scripts/common.sh@340 -- # ver2_l=1 00:20:27.153 13:34:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:27.153 13:34:32 -- scripts/common.sh@343 -- # case "$op" in 00:20:27.153 13:34:32 -- scripts/common.sh@344 -- # : 1 00:20:27.153 13:34:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:27.153 13:34:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:27.153 13:34:32 -- scripts/common.sh@364 -- # decimal 1 00:20:27.153 13:34:32 -- scripts/common.sh@352 -- # local d=1 00:20:27.153 13:34:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.153 13:34:32 -- scripts/common.sh@354 -- # echo 1 00:20:27.153 13:34:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:27.153 13:34:32 -- scripts/common.sh@365 -- # decimal 2 00:20:27.153 13:34:32 -- scripts/common.sh@352 -- # local d=2 00:20:27.153 13:34:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.153 13:34:32 -- scripts/common.sh@354 -- # echo 2 00:20:27.153 13:34:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:27.153 13:34:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:27.153 13:34:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:27.153 13:34:32 -- scripts/common.sh@367 -- # return 0 00:20:27.153 13:34:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.153 13:34:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.153 --rc genhtml_branch_coverage=1 00:20:27.153 --rc genhtml_function_coverage=1 00:20:27.153 --rc genhtml_legend=1 00:20:27.153 --rc geninfo_all_blocks=1 00:20:27.153 --rc geninfo_unexecuted_blocks=1 00:20:27.153 00:20:27.153 ' 00:20:27.153 13:34:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.153 --rc genhtml_branch_coverage=1 00:20:27.153 --rc genhtml_function_coverage=1 00:20:27.153 --rc genhtml_legend=1 00:20:27.153 --rc geninfo_all_blocks=1 00:20:27.153 --rc geninfo_unexecuted_blocks=1 00:20:27.153 00:20:27.153 ' 00:20:27.153 13:34:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.153 --rc genhtml_branch_coverage=1 00:20:27.153 --rc genhtml_function_coverage=1 00:20:27.153 --rc genhtml_legend=1 00:20:27.153 --rc geninfo_all_blocks=1 00:20:27.153 --rc geninfo_unexecuted_blocks=1 00:20:27.153 00:20:27.153 ' 00:20:27.153 
13:34:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.153 --rc genhtml_branch_coverage=1 00:20:27.153 --rc genhtml_function_coverage=1 00:20:27.153 --rc genhtml_legend=1 00:20:27.153 --rc geninfo_all_blocks=1 00:20:27.153 --rc geninfo_unexecuted_blocks=1 00:20:27.153 00:20:27.153 ' 00:20:27.153 13:34:32 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.153 13:34:32 -- nvmf/common.sh@7 -- # uname -s 00:20:27.153 13:34:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.153 13:34:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.153 13:34:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.153 13:34:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.153 13:34:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.153 13:34:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.153 13:34:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.153 13:34:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.153 13:34:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.153 13:34:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.153 13:34:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:27.153 13:34:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:20:27.153 13:34:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.153 13:34:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.153 13:34:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.153 13:34:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.153 13:34:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.153 13:34:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.153 13:34:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.153 13:34:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.153 13:34:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.153 13:34:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.153 13:34:32 -- paths/export.sh@5 -- # export PATH 00:20:27.153 13:34:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.153 13:34:32 -- nvmf/common.sh@46 -- # : 0 00:20:27.153 13:34:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:27.153 13:34:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:27.153 13:34:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:27.153 13:34:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.153 13:34:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.153 13:34:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:27.153 13:34:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:27.153 13:34:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:27.153 13:34:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:27.153 13:34:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:27.153 13:34:32 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:27.153 13:34:32 -- host/perf.sh@17 -- # nvmftestinit 00:20:27.153 13:34:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:27.153 13:34:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.153 13:34:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:27.153 13:34:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:27.153 13:34:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:27.153 13:34:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.153 13:34:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.154 13:34:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.154 13:34:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:27.154 13:34:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:27.154 13:34:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:27.154 13:34:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:27.154 13:34:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:27.154 13:34:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:27.154 13:34:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.154 13:34:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.154 13:34:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:27.154 13:34:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:27.154 13:34:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.154 13:34:32 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.154 13:34:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.154 13:34:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.154 13:34:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.154 13:34:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.154 13:34:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.154 13:34:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.154 13:34:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:27.154 13:34:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:27.154 Cannot find device "nvmf_tgt_br" 00:20:27.154 13:34:32 -- nvmf/common.sh@154 -- # true 00:20:27.154 13:34:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.154 Cannot find device "nvmf_tgt_br2" 00:20:27.154 13:34:32 -- nvmf/common.sh@155 -- # true 00:20:27.154 13:34:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:27.154 13:34:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:27.154 Cannot find device "nvmf_tgt_br" 00:20:27.154 13:34:32 -- nvmf/common.sh@157 -- # true 00:20:27.154 13:34:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:27.413 Cannot find device "nvmf_tgt_br2" 00:20:27.413 13:34:32 -- nvmf/common.sh@158 -- # true 00:20:27.413 13:34:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:27.413 13:34:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:27.413 13:34:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.413 13:34:32 -- nvmf/common.sh@161 -- # true 00:20:27.413 13:34:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.413 13:34:32 -- nvmf/common.sh@162 -- # true 00:20:27.413 13:34:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.413 13:34:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.413 13:34:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.413 13:34:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.413 13:34:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.413 13:34:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.413 13:34:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.413 13:34:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:27.413 13:34:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:27.413 13:34:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:27.413 13:34:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:27.413 13:34:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:27.413 13:34:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:27.413 13:34:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.413 13:34:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
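The veth wiring that nvmf_veth_init performs in the trace above is easier to read collected in one place. This is a sketch reconstructed from the ip commands shown (interface names and the 10.0.0.0/24 addressing are the harness's own); the nvmf_br bridge, the iptables accept rule for port 4420 and the ping checks follow immediately below.

# fresh namespace for the target side of the test
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to a bridge later
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side interfaces into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator gets 10.0.0.1, the two target interfaces get 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up on both sides
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up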
00:20:27.413 13:34:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.413 13:34:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:27.413 13:34:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:27.413 13:34:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:27.413 13:34:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:27.413 13:34:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:27.413 13:34:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:27.413 13:34:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:27.413 13:34:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:27.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:27.413 00:20:27.413 --- 10.0.0.2 ping statistics --- 00:20:27.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.413 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:27.413 13:34:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:27.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:27.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:27.413 00:20:27.413 --- 10.0.0.3 ping statistics --- 00:20:27.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.414 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:27.414 13:34:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:27.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:27.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:20:27.414 00:20:27.414 --- 10.0.0.1 ping statistics --- 00:20:27.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.414 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:27.414 13:34:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.414 13:34:33 -- nvmf/common.sh@421 -- # return 0 00:20:27.414 13:34:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:27.414 13:34:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.414 13:34:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:27.414 13:34:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:27.414 13:34:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.414 13:34:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:27.414 13:34:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:27.673 13:34:33 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:27.673 13:34:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:27.673 13:34:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.673 13:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:27.673 13:34:33 -- nvmf/common.sh@469 -- # nvmfpid=93720 00:20:27.673 13:34:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:27.673 13:34:33 -- nvmf/common.sh@470 -- # waitforlisten 93720 00:20:27.673 13:34:33 -- common/autotest_common.sh@829 -- # '[' -z 93720 ']' 00:20:27.673 13:34:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.673 13:34:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
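In plain form, the target launch that starts here is: load the kernel NVMe/TCP initiator on the host side, then run nvmf_tgt inside the test namespace and wait for its RPC socket. A minimal sketch using only the flags visible in the trace (-i shared-memory id, -e tracepoint group mask, -m core mask); the waitforlisten polling of the real harness is only indicated by a comment.

# kernel initiator for later connect tests
modprobe nvme-tcp
# start the SPDK target inside the namespace on cores 0-3 with all tracepoint groups enabled
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!   # pid 93720 in the trace above
# the harness then waits for /var/tmp/spdk.sock to accept RPCs before configuring the target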
00:20:27.673 13:34:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.673 13:34:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.673 13:34:33 -- common/autotest_common.sh@10 -- # set +x 00:20:27.673 [2024-12-15 13:34:33.166036] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:27.673 [2024-12-15 13:34:33.166118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.673 [2024-12-15 13:34:33.306299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:27.931 [2024-12-15 13:34:33.368617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:27.931 [2024-12-15 13:34:33.368782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.931 [2024-12-15 13:34:33.368795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.931 [2024-12-15 13:34:33.368803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.931 [2024-12-15 13:34:33.369349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.931 [2024-12-15 13:34:33.369441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.931 [2024-12-15 13:34:33.369970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.931 [2024-12-15 13:34:33.369975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.498 13:34:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.498 13:34:34 -- common/autotest_common.sh@862 -- # return 0 00:20:28.498 13:34:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:28.498 13:34:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.498 13:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:28.757 13:34:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.757 13:34:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:28.757 13:34:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:29.016 13:34:34 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:29.016 13:34:34 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:29.275 13:34:34 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:29.275 13:34:34 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:29.534 13:34:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:29.534 13:34:35 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:29.534 13:34:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:29.534 13:34:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:29.534 13:34:35 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.792 [2024-12-15 13:34:35.431271] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.792 13:34:35 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:30.050 13:34:35 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:30.050 13:34:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:30.309 13:34:35 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:30.309 13:34:35 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:30.580 13:34:36 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.853 [2024-12-15 13:34:36.312419] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.853 13:34:36 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:31.112 13:34:36 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:31.112 13:34:36 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:31.112 13:34:36 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:31.112 13:34:36 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:32.048 Initializing NVMe Controllers 00:20:32.048 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:32.048 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:32.048 Initialization complete. Launching workers. 00:20:32.048 ======================================================== 00:20:32.048 Latency(us) 00:20:32.048 Device Information : IOPS MiB/s Average min max 00:20:32.048 PCIE (0000:00:06.0) NSID 1 from core 0: 19935.98 77.87 1604.80 432.64 9269.79 00:20:32.048 ======================================================== 00:20:32.048 Total : 19935.98 77.87 1604.80 432.64 9269.79 00:20:32.048 00:20:32.048 13:34:37 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:33.425 Initializing NVMe Controllers 00:20:33.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:33.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:33.425 Initialization complete. Launching workers. 
00:20:33.425 ======================================================== 00:20:33.425 Latency(us) 00:20:33.425 Device Information : IOPS MiB/s Average min max 00:20:33.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3510.74 13.71 284.56 99.68 5257.42 00:20:33.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.89 6969.59 12032.95 00:20:33.425 ======================================================== 00:20:33.425 Total : 3634.23 14.20 552.21 99.68 12032.95 00:20:33.425 00:20:33.425 13:34:38 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.802 [2024-12-15 13:34:40.246690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 [2024-12-15 13:34:40.246832] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12410f0 is same with the state(5) to be set 00:20:34.802 Initializing NVMe Controllers 00:20:34.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.802 Initialization complete. Launching workers. 
00:20:34.802 ======================================================== 00:20:34.802 Latency(us) 00:20:34.802 Device Information : IOPS MiB/s Average min max 00:20:34.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9737.00 38.04 3288.42 593.85 7534.44 00:20:34.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2738.00 10.70 11794.50 6556.44 20927.56 00:20:34.802 ======================================================== 00:20:34.802 Total : 12475.00 48.73 5155.32 593.85 20927.56 00:20:34.802 00:20:34.802 13:34:40 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:34.802 13:34:40 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.335 [2024-12-15 13:34:42.807698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c740 is same with the state(5) to be set 00:20:37.335 [2024-12-15 13:34:42.807779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c740 is same with the state(5) to be set 00:20:37.335 [2024-12-15 13:34:42.807790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c740 is same with the state(5) to be set 00:20:37.335 Initializing NVMe Controllers 00:20:37.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.335 Controller IO queue size 128, less than required. 00:20:37.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.335 Controller IO queue size 128, less than required. 00:20:37.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.335 Initialization complete. Launching workers. 00:20:37.335 ======================================================== 00:20:37.335 Latency(us) 00:20:37.335 Device Information : IOPS MiB/s Average min max 00:20:37.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1549.09 387.27 83435.89 53785.45 134368.35 00:20:37.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.20 142.05 235067.01 83947.06 382709.68 00:20:37.335 ======================================================== 00:20:37.335 Total : 2117.30 529.32 124127.86 53785.45 382709.68 00:20:37.335 00:20:37.335 13:34:42 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:37.594 No valid NVMe controllers or AIO or URING devices found 00:20:37.594 Initializing NVMe Controllers 00:20:37.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.594 Controller IO queue size 128, less than required. 00:20:37.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.594 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:37.594 Controller IO queue size 128, less than required. 00:20:37.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:20:37.594 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:37.594 WARNING: Some requested NVMe devices were skipped 00:20:37.594 13:34:43 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:40.124 Initializing NVMe Controllers 00:20:40.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.124 Controller IO queue size 128, less than required. 00:20:40.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.124 Controller IO queue size 128, less than required. 00:20:40.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:40.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:40.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:40.125 Initialization complete. Launching workers. 00:20:40.125 00:20:40.125 ==================== 00:20:40.125 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:40.125 TCP transport: 00:20:40.125 polls: 12129 00:20:40.125 idle_polls: 8919 00:20:40.125 sock_completions: 3210 00:20:40.125 nvme_completions: 4148 00:20:40.125 submitted_requests: 6300 00:20:40.125 queued_requests: 1 00:20:40.125 00:20:40.125 ==================== 00:20:40.125 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:40.125 TCP transport: 00:20:40.125 polls: 6294 00:20:40.125 idle_polls: 3261 00:20:40.125 sock_completions: 3033 00:20:40.125 nvme_completions: 4503 00:20:40.125 submitted_requests: 6960 00:20:40.125 queued_requests: 1 00:20:40.125 ======================================================== 00:20:40.125 Latency(us) 00:20:40.125 Device Information : IOPS MiB/s Average min max 00:20:40.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1099.92 274.98 119789.15 65930.59 205446.11 00:20:40.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1188.87 297.22 108345.15 51050.42 163102.10 00:20:40.125 ======================================================== 00:20:40.125 Total : 2288.78 572.20 113844.77 51050.42 205446.11 00:20:40.125 00:20:40.125 13:34:45 -- host/perf.sh@66 -- # sync 00:20:40.125 13:34:45 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.383 13:34:46 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:40.383 13:34:46 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:40.383 13:34:46 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:40.949 13:34:46 -- host/perf.sh@72 -- # ls_guid=22b0e674-a8a7-4f5e-a752-1037848dcde6 00:20:40.949 13:34:46 -- host/perf.sh@73 -- # get_lvs_free_mb 22b0e674-a8a7-4f5e-a752-1037848dcde6 00:20:40.949 13:34:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=22b0e674-a8a7-4f5e-a752-1037848dcde6 00:20:40.949 13:34:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:40.949 13:34:46 -- common/autotest_common.sh@1355 -- # local fc 00:20:40.949 13:34:46 -- common/autotest_common.sh@1356 -- # local cs 00:20:40.949 13:34:46 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:20:40.949 13:34:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:40.949 { 00:20:40.949 "base_bdev": "Nvme0n1", 00:20:40.949 "block_size": 4096, 00:20:40.949 "cluster_size": 4194304, 00:20:40.949 "free_clusters": 1278, 00:20:40.949 "name": "lvs_0", 00:20:40.949 "total_data_clusters": 1278, 00:20:40.949 "uuid": "22b0e674-a8a7-4f5e-a752-1037848dcde6" 00:20:40.949 } 00:20:40.949 ]' 00:20:40.949 13:34:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="22b0e674-a8a7-4f5e-a752-1037848dcde6") .free_clusters' 00:20:40.949 13:34:46 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:40.949 13:34:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="22b0e674-a8a7-4f5e-a752-1037848dcde6") .cluster_size' 00:20:41.207 5112 00:20:41.207 13:34:46 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:41.207 13:34:46 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:41.207 13:34:46 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:41.207 13:34:46 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:41.207 13:34:46 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 22b0e674-a8a7-4f5e-a752-1037848dcde6 lbd_0 5112 00:20:41.465 13:34:46 -- host/perf.sh@80 -- # lb_guid=075c731d-b364-4591-999b-0017b7af020a 00:20:41.465 13:34:46 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 075c731d-b364-4591-999b-0017b7af020a lvs_n_0 00:20:41.723 13:34:47 -- host/perf.sh@83 -- # ls_nested_guid=f0173b77-5c1d-4968-b05b-25ed94047f51 00:20:41.723 13:34:47 -- host/perf.sh@84 -- # get_lvs_free_mb f0173b77-5c1d-4968-b05b-25ed94047f51 00:20:41.723 13:34:47 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f0173b77-5c1d-4968-b05b-25ed94047f51 00:20:41.723 13:34:47 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:41.723 13:34:47 -- common/autotest_common.sh@1355 -- # local fc 00:20:41.723 13:34:47 -- common/autotest_common.sh@1356 -- # local cs 00:20:41.723 13:34:47 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:41.981 13:34:47 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:41.981 { 00:20:41.981 "base_bdev": "Nvme0n1", 00:20:41.981 "block_size": 4096, 00:20:41.981 "cluster_size": 4194304, 00:20:41.981 "free_clusters": 0, 00:20:41.981 "name": "lvs_0", 00:20:41.981 "total_data_clusters": 1278, 00:20:41.981 "uuid": "22b0e674-a8a7-4f5e-a752-1037848dcde6" 00:20:41.981 }, 00:20:41.981 { 00:20:41.981 "base_bdev": "075c731d-b364-4591-999b-0017b7af020a", 00:20:41.981 "block_size": 4096, 00:20:41.981 "cluster_size": 4194304, 00:20:41.981 "free_clusters": 1276, 00:20:41.981 "name": "lvs_n_0", 00:20:41.981 "total_data_clusters": 1276, 00:20:41.981 "uuid": "f0173b77-5c1d-4968-b05b-25ed94047f51" 00:20:41.981 } 00:20:41.981 ]' 00:20:41.981 13:34:47 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f0173b77-5c1d-4968-b05b-25ed94047f51") .free_clusters' 00:20:41.981 13:34:47 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:41.981 13:34:47 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f0173b77-5c1d-4968-b05b-25ed94047f51") .cluster_size' 00:20:41.981 5104 00:20:41.981 13:34:47 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:41.981 13:34:47 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:41.981 13:34:47 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:41.981 13:34:47 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:41.981 13:34:47 -- 
host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f0173b77-5c1d-4968-b05b-25ed94047f51 lbd_nest_0 5104 00:20:42.547 13:34:47 -- host/perf.sh@88 -- # lb_nested_guid=d62b4e26-fc06-4be5-a0e2-9324f5517fa4 00:20:42.547 13:34:47 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.547 13:34:48 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:42.547 13:34:48 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d62b4e26-fc06-4be5-a0e2-9324f5517fa4 00:20:42.806 13:34:48 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.064 13:34:48 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:43.064 13:34:48 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:43.064 13:34:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:43.064 13:34:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.064 13:34:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.322 No valid NVMe controllers or AIO or URING devices found 00:20:43.322 Initializing NVMe Controllers 00:20:43.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.322 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:43.322 WARNING: Some requested NVMe devices were skipped 00:20:43.322 13:34:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.323 13:34:48 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.524 Initializing NVMe Controllers 00:20:55.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.524 Initialization complete. Launching workers. 
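The runs from here on are driven by the two arrays traced above (qd_depth and io_size); a minimal reconstruction of that loop, with the same perf binary and TCP target address used in this job, looks like:

  for qd in 1 32 128; do
    for io in 512 131072; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$io" \
        -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done

The 512-byte passes are expected to be skipped: the namespace sits on a 4096-byte-block lvol bdev (ns size 5351931904 / block size 4096), so a 512-byte I/O cannot be issued against it, which is what the preceding warning reports. The results that follow therefore come only from the 131072-byte passes.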
00:20:55.524 ======================================================== 00:20:55.524 Latency(us) 00:20:55.524 Device Information : IOPS MiB/s Average min max 00:20:55.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 831.61 103.95 1202.10 355.20 10669.69 00:20:55.524 ======================================================== 00:20:55.524 Total : 831.61 103.95 1202.10 355.20 10669.69 00:20:55.524 00:20:55.524 13:34:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.524 13:34:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.524 13:34:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.524 No valid NVMe controllers or AIO or URING devices found 00:20:55.524 Initializing NVMe Controllers 00:20:55.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.524 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:55.524 WARNING: Some requested NVMe devices were skipped 00:20:55.524 13:34:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.524 13:34:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:05.514 Initializing NVMe Controllers 00:21:05.514 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.515 Initialization complete. Launching workers. 00:21:05.515 ======================================================== 00:21:05.515 Latency(us) 00:21:05.515 Device Information : IOPS MiB/s Average min max 00:21:05.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 997.79 124.72 32080.27 7905.73 447928.46 00:21:05.515 ======================================================== 00:21:05.515 Total : 997.79 124.72 32080.27 7905.73 447928.46 00:21:05.515 00:21:05.515 13:35:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:05.515 13:35:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.515 13:35:09 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:05.515 No valid NVMe controllers or AIO or URING devices found 00:21:05.515 Initializing NVMe Controllers 00:21:05.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.515 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:05.515 WARNING: Some requested NVMe devices were skipped 00:21:05.515 13:35:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.515 13:35:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:15.490 Initializing NVMe Controllers 00:21:15.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.490 Controller IO queue size 128, less than required. 00:21:15.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
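For these 131072-byte runs the MiB/s column is simply IOPS divided by 8, since each I/O is 128 KiB, i.e. 1/8 MiB; for the q=1 and q=32 passes above:

  awk 'BEGIN { print 831.61 / 8, 997.79 / 8 }'   # -> 103.951 124.724

which matches the reported 103.95 and 124.72 MiB/s. The 128-deep pass being set up here follows the same relationship in the table below.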
00:21:15.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.490 Initialization complete. Launching workers. 00:21:15.490 ======================================================== 00:21:15.490 Latency(us) 00:21:15.490 Device Information : IOPS MiB/s Average min max 00:21:15.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3939.07 492.38 32574.27 13296.65 72819.25 00:21:15.490 ======================================================== 00:21:15.490 Total : 3939.07 492.38 32574.27 13296.65 72819.25 00:21:15.490 00:21:15.490 13:35:20 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.490 13:35:20 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d62b4e26-fc06-4be5-a0e2-9324f5517fa4 00:21:15.490 13:35:21 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:15.748 13:35:21 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 075c731d-b364-4591-999b-0017b7af020a 00:21:16.006 13:35:21 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:16.265 13:35:21 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:16.265 13:35:21 -- host/perf.sh@114 -- # nvmftestfini 00:21:16.265 13:35:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:16.265 13:35:21 -- nvmf/common.sh@116 -- # sync 00:21:16.265 13:35:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:16.265 13:35:21 -- nvmf/common.sh@119 -- # set +e 00:21:16.265 13:35:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:16.265 13:35:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:16.265 rmmod nvme_tcp 00:21:16.265 rmmod nvme_fabrics 00:21:16.600 rmmod nvme_keyring 00:21:16.600 13:35:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:16.600 13:35:21 -- nvmf/common.sh@123 -- # set -e 00:21:16.600 13:35:21 -- nvmf/common.sh@124 -- # return 0 00:21:16.600 13:35:21 -- nvmf/common.sh@477 -- # '[' -n 93720 ']' 00:21:16.600 13:35:21 -- nvmf/common.sh@478 -- # killprocess 93720 00:21:16.600 13:35:21 -- common/autotest_common.sh@936 -- # '[' -z 93720 ']' 00:21:16.600 13:35:21 -- common/autotest_common.sh@940 -- # kill -0 93720 00:21:16.600 13:35:21 -- common/autotest_common.sh@941 -- # uname 00:21:16.600 13:35:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:16.600 13:35:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93720 00:21:16.600 killing process with pid 93720 00:21:16.600 13:35:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:16.600 13:35:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:16.600 13:35:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93720' 00:21:16.600 13:35:22 -- common/autotest_common.sh@955 -- # kill 93720 00:21:16.600 13:35:22 -- common/autotest_common.sh@960 -- # wait 93720 00:21:17.976 13:35:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:17.976 13:35:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:17.976 13:35:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:17.976 13:35:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.976 13:35:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:17.976 13:35:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.976 13:35:23 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:17.976 13:35:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.976 13:35:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:17.976 ************************************ 00:21:17.976 END TEST nvmf_perf 00:21:17.976 ************************************ 00:21:17.976 00:21:17.976 real 0m51.062s 00:21:17.976 user 3m11.462s 00:21:17.976 sys 0m10.145s 00:21:17.976 13:35:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:17.976 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:21:18.235 13:35:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:18.235 13:35:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:18.235 13:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:18.235 13:35:23 -- common/autotest_common.sh@10 -- # set +x 00:21:18.236 ************************************ 00:21:18.236 START TEST nvmf_fio_host 00:21:18.236 ************************************ 00:21:18.236 13:35:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:18.236 * Looking for test storage... 00:21:18.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:18.236 13:35:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:18.236 13:35:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:18.236 13:35:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:18.236 13:35:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:18.236 13:35:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:18.236 13:35:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:18.236 13:35:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:18.236 13:35:23 -- scripts/common.sh@335 -- # IFS=.-: 00:21:18.236 13:35:23 -- scripts/common.sh@335 -- # read -ra ver1 00:21:18.236 13:35:23 -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.236 13:35:23 -- scripts/common.sh@336 -- # read -ra ver2 00:21:18.236 13:35:23 -- scripts/common.sh@337 -- # local 'op=<' 00:21:18.236 13:35:23 -- scripts/common.sh@339 -- # ver1_l=2 00:21:18.236 13:35:23 -- scripts/common.sh@340 -- # ver2_l=1 00:21:18.236 13:35:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:18.236 13:35:23 -- scripts/common.sh@343 -- # case "$op" in 00:21:18.236 13:35:23 -- scripts/common.sh@344 -- # : 1 00:21:18.236 13:35:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:18.236 13:35:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.236 13:35:23 -- scripts/common.sh@364 -- # decimal 1 00:21:18.236 13:35:23 -- scripts/common.sh@352 -- # local d=1 00:21:18.236 13:35:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.236 13:35:23 -- scripts/common.sh@354 -- # echo 1 00:21:18.236 13:35:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:18.236 13:35:23 -- scripts/common.sh@365 -- # decimal 2 00:21:18.236 13:35:23 -- scripts/common.sh@352 -- # local d=2 00:21:18.236 13:35:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.236 13:35:23 -- scripts/common.sh@354 -- # echo 2 00:21:18.236 13:35:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:18.236 13:35:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:18.236 13:35:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:18.236 13:35:23 -- scripts/common.sh@367 -- # return 0 00:21:18.236 13:35:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.236 13:35:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.236 --rc genhtml_branch_coverage=1 00:21:18.236 --rc genhtml_function_coverage=1 00:21:18.236 --rc genhtml_legend=1 00:21:18.236 --rc geninfo_all_blocks=1 00:21:18.236 --rc geninfo_unexecuted_blocks=1 00:21:18.236 00:21:18.236 ' 00:21:18.236 13:35:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.236 --rc genhtml_branch_coverage=1 00:21:18.236 --rc genhtml_function_coverage=1 00:21:18.236 --rc genhtml_legend=1 00:21:18.236 --rc geninfo_all_blocks=1 00:21:18.236 --rc geninfo_unexecuted_blocks=1 00:21:18.236 00:21:18.236 ' 00:21:18.236 13:35:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.236 --rc genhtml_branch_coverage=1 00:21:18.236 --rc genhtml_function_coverage=1 00:21:18.236 --rc genhtml_legend=1 00:21:18.236 --rc geninfo_all_blocks=1 00:21:18.236 --rc geninfo_unexecuted_blocks=1 00:21:18.236 00:21:18.236 ' 00:21:18.236 13:35:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:18.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.236 --rc genhtml_branch_coverage=1 00:21:18.236 --rc genhtml_function_coverage=1 00:21:18.236 --rc genhtml_legend=1 00:21:18.236 --rc geninfo_all_blocks=1 00:21:18.236 --rc geninfo_unexecuted_blocks=1 00:21:18.236 00:21:18.236 ' 00:21:18.236 13:35:23 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.236 13:35:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.236 13:35:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.236 13:35:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.236 13:35:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@5 -- # export PATH 00:21:18.236 13:35:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:18.236 13:35:23 -- nvmf/common.sh@7 -- # uname -s 00:21:18.236 13:35:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.236 13:35:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.236 13:35:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.236 13:35:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.236 13:35:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.236 13:35:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.236 13:35:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.236 13:35:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.236 13:35:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.236 13:35:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.236 13:35:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:21:18.236 13:35:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:21:18.236 13:35:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.236 13:35:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.236 13:35:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:18.236 13:35:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.236 13:35:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.236 13:35:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.236 13:35:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.236 13:35:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.236 13:35:23 -- paths/export.sh@5 -- # export PATH 00:21:18.237 13:35:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.237 13:35:23 -- nvmf/common.sh@46 -- # : 0 00:21:18.237 13:35:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:18.237 13:35:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:18.237 13:35:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:18.237 13:35:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.237 13:35:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.237 13:35:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:18.237 13:35:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:18.237 13:35:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:18.237 13:35:23 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:18.237 13:35:23 -- host/fio.sh@14 -- # nvmftestinit 00:21:18.237 13:35:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:18.237 13:35:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.237 13:35:23 -- nvmf/common.sh@436 -- # prepare_net_devs 
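With NET_TYPE=virt and is_hw staying "no", prepare_net_devs falls through to nvmf_veth_init, and the trace that follows builds a small veth/namespace topology: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and everything is joined by the nvmf_br bridge. Condensed to its essentials (the second target interface is set up identically and is omitted here), the sequence is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # plus 'ip link set ... up' for each interface and the bridge, as in the trace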
00:21:18.237 13:35:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:18.237 13:35:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:18.237 13:35:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.237 13:35:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.237 13:35:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.495 13:35:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:18.495 13:35:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:18.495 13:35:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:18.495 13:35:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:18.495 13:35:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:18.495 13:35:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:18.495 13:35:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.495 13:35:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.495 13:35:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:18.495 13:35:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:18.495 13:35:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:18.495 13:35:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:18.495 13:35:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:18.495 13:35:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.495 13:35:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:18.495 13:35:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:18.495 13:35:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:18.495 13:35:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:18.495 13:35:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:18.495 13:35:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:18.495 Cannot find device "nvmf_tgt_br" 00:21:18.495 13:35:23 -- nvmf/common.sh@154 -- # true 00:21:18.495 13:35:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:18.495 Cannot find device "nvmf_tgt_br2" 00:21:18.495 13:35:23 -- nvmf/common.sh@155 -- # true 00:21:18.495 13:35:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:18.495 13:35:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:18.495 Cannot find device "nvmf_tgt_br" 00:21:18.495 13:35:23 -- nvmf/common.sh@157 -- # true 00:21:18.495 13:35:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:18.495 Cannot find device "nvmf_tgt_br2" 00:21:18.495 13:35:23 -- nvmf/common.sh@158 -- # true 00:21:18.495 13:35:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:18.495 13:35:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:18.496 13:35:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.496 13:35:24 -- nvmf/common.sh@161 -- # true 00:21:18.496 13:35:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.496 13:35:24 -- nvmf/common.sh@162 -- # true 00:21:18.496 13:35:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:18.496 13:35:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.496 13:35:24 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.496 13:35:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.496 13:35:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.496 13:35:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.496 13:35:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:18.496 13:35:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:18.496 13:35:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:18.496 13:35:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:18.496 13:35:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:18.496 13:35:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:18.496 13:35:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:18.496 13:35:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:18.496 13:35:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:18.496 13:35:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:18.754 13:35:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:18.754 13:35:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:18.754 13:35:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:18.754 13:35:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:18.754 13:35:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:18.754 13:35:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:18.754 13:35:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:18.754 13:35:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:18.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:18.754 00:21:18.754 --- 10.0.0.2 ping statistics --- 00:21:18.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.754 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:18.754 13:35:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:18.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:18.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:18.754 00:21:18.754 --- 10.0.0.3 ping statistics --- 00:21:18.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.754 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:18.754 13:35:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:18.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:18.755 00:21:18.755 --- 10.0.0.1 ping statistics --- 00:21:18.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.755 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:18.755 13:35:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.755 13:35:24 -- nvmf/common.sh@421 -- # return 0 00:21:18.755 13:35:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:18.755 13:35:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.755 13:35:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:18.755 13:35:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:18.755 13:35:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.755 13:35:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:18.755 13:35:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:18.755 13:35:24 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:18.755 13:35:24 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:18.755 13:35:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.755 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:18.755 13:35:24 -- host/fio.sh@24 -- # nvmfpid=94702 00:21:18.755 13:35:24 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.755 13:35:24 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.755 13:35:24 -- host/fio.sh@28 -- # waitforlisten 94702 00:21:18.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.755 13:35:24 -- common/autotest_common.sh@829 -- # '[' -z 94702 ']' 00:21:18.755 13:35:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.755 13:35:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.755 13:35:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.755 13:35:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.755 13:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:18.755 [2024-12-15 13:35:24.339045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:18.755 [2024-12-15 13:35:24.339131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.014 [2024-12-15 13:35:24.481935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.014 [2024-12-15 13:35:24.552518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.014 [2024-12-15 13:35:24.553008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.014 [2024-12-15 13:35:24.553152] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.014 [2024-12-15 13:35:24.553318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
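The target is launched with '-i 0 -e 0xFFFF -m 0xF'. The core mask 0xF is binary 1111, i.e. cores 0-3, which is why app_start reports four available cores and, just below, one reactor comes up on each of cores 0 through 3; a quick conversion:

  echo 'obase=2; ibase=16; F' | bc   # -> 1111  (cores 0,1,2,3)

-e 0xFFFF turns on every tracepoint group (hence the 'Tracepoint Group Mask 0xFFFF specified' notice and the hint to capture it with spdk_trace), and -i 0 is the shared-memory instance id assembled into NVMF_APP earlier, which is also where the /dev/shm/nvmf_trace.0 path gets its suffix.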
00:21:19.014 [2024-12-15 13:35:24.553572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.014 [2024-12-15 13:35:24.553686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.014 [2024-12-15 13:35:24.553753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.014 [2024-12-15 13:35:24.553756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.956 13:35:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.956 13:35:25 -- common/autotest_common.sh@862 -- # return 0 00:21:19.956 13:35:25 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:19.956 [2024-12-15 13:35:25.570431] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.956 13:35:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:19.956 13:35:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.956 13:35:25 -- common/autotest_common.sh@10 -- # set +x 00:21:19.956 13:35:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:20.522 Malloc1 00:21:20.522 13:35:25 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:20.781 13:35:26 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:20.781 13:35:26 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.040 [2024-12-15 13:35:26.700949] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.040 13:35:26 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:21.607 13:35:27 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:21.607 13:35:27 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.607 13:35:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.607 13:35:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:21.607 13:35:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:21.607 13:35:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:21.607 13:35:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.607 13:35:27 -- common/autotest_common.sh@1330 -- # shift 00:21:21.607 13:35:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:21.607 13:35:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:21.607 13:35:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:21.607 13:35:27 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:21.607 13:35:27 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:21.607 13:35:27 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:21.607 13:35:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:21.607 13:35:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.607 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:21.607 fio-3.35 00:21:21.607 Starting 1 thread 00:21:24.138 00:21:24.138 test: (groupid=0, jobs=1): err= 0: pid=94828: Sun Dec 15 13:35:29 2024 00:21:24.138 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(81.7MiB/2006msec) 00:21:24.138 slat (nsec): min=1757, max=335624, avg=2635.52, stdev=3452.56 00:21:24.138 clat (usec): min=3223, max=13753, avg=6524.44, stdev=669.31 00:21:24.138 lat (usec): min=3258, max=13759, avg=6527.07, stdev=669.35 00:21:24.138 clat percentiles (usec): 00:21:24.138 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:21:24.138 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:21:24.138 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7570], 00:21:24.138 | 99.00th=[ 8455], 99.50th=[ 9372], 99.90th=[12125], 99.95th=[12387], 00:21:24.138 | 99.99th=[13566] 00:21:24.138 bw ( KiB/s): min=39832, max=43080, per=99.96%, avg=41676.00, stdev=1462.31, samples=4 00:21:24.138 iops : min= 9958, max=10770, avg=10419.00, stdev=365.58, samples=4 00:21:24.138 write: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(81.7MiB/2006msec); 0 zone resets 00:21:24.138 slat (nsec): min=1831, max=267899, avg=2768.80, stdev=2833.50 00:21:24.138 clat (usec): min=2465, max=11287, avg=5703.56, stdev=538.10 00:21:24.138 lat (usec): min=2479, max=11289, avg=5706.33, stdev=538.19 00:21:24.138 clat percentiles (usec): 00:21:24.138 | 1.00th=[ 4686], 5.00th=[ 4948], 10.00th=[ 5145], 20.00th=[ 5276], 00:21:24.138 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:21:24.138 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6521], 00:21:24.138 | 99.00th=[ 7308], 99.50th=[ 7963], 99.90th=[ 9634], 99.95th=[10421], 00:21:24.138 | 99.99th=[11207] 00:21:24.138 bw ( KiB/s): min=40512, max=42816, per=100.00%, avg=41712.00, stdev=1117.56, samples=4 00:21:24.138 iops : min=10128, max=10704, avg=10428.00, stdev=279.39, samples=4 00:21:24.138 lat (msec) : 4=0.07%, 10=99.73%, 20=0.20% 00:21:24.138 cpu : usr=62.89%, sys=26.38%, ctx=9, majf=0, minf=5 00:21:24.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:24.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:24.138 issued rwts: total=20908,20913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.138 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:24.138 00:21:24.138 Run status group 0 (all jobs): 00:21:24.138 READ: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=81.7MiB (85.6MB), 
run=2006-2006msec 00:21:24.138 WRITE: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=81.7MiB (85.7MB), run=2006-2006msec 00:21:24.138 13:35:29 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:24.138 13:35:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:24.138 13:35:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:24.138 13:35:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.138 13:35:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:24.138 13:35:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.138 13:35:29 -- common/autotest_common.sh@1330 -- # shift 00:21:24.138 13:35:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:24.138 13:35:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.138 13:35:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:24.138 13:35:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.138 13:35:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.138 13:35:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.138 13:35:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.138 13:35:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.139 13:35:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.139 13:35:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:24.139 13:35:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:24.139 13:35:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:24.139 13:35:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:24.139 13:35:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:24.139 13:35:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:24.139 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:24.139 fio-3.35 00:21:24.139 Starting 1 thread 00:21:26.670 00:21:26.670 test: (groupid=0, jobs=1): err= 0: pid=94877: Sun Dec 15 13:35:31 2024 00:21:26.670 read: IOPS=9090, BW=142MiB/s (149MB/s)(285MiB/2003msec) 00:21:26.670 slat (usec): min=2, max=107, avg= 3.54, stdev= 2.24 00:21:26.670 clat (usec): min=2314, max=16193, avg=8445.16, stdev=2096.97 00:21:26.670 lat (usec): min=2317, max=16196, avg=8448.69, stdev=2097.12 00:21:26.670 clat percentiles (usec): 00:21:26.670 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:21:26.670 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 8979], 00:21:26.670 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10945], 95.00th=[11994], 00:21:26.670 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15795], 99.95th=[15926], 00:21:26.670 | 99.99th=[16188] 00:21:26.670 bw ( KiB/s): min=66240, max=74912, per=49.02%, avg=71296.00, stdev=3665.27, samples=4 00:21:26.670 iops : 
min= 4140, max= 4682, avg=4456.00, stdev=229.08, samples=4 00:21:26.670 write: IOPS=5327, BW=83.2MiB/s (87.3MB/s)(145MiB/1745msec); 0 zone resets 00:21:26.670 slat (usec): min=31, max=349, avg=35.80, stdev= 9.22 00:21:26.670 clat (usec): min=4279, max=16861, avg=10100.21, stdev=1841.34 00:21:26.670 lat (usec): min=4311, max=16895, avg=10136.02, stdev=1843.52 00:21:26.670 clat percentiles (usec): 00:21:26.670 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8586], 00:21:26.670 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:21:26.670 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12780], 95.00th=[13566], 00:21:26.670 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16319], 99.95th=[16581], 00:21:26.670 | 99.99th=[16909] 00:21:26.670 bw ( KiB/s): min=68864, max=78848, per=87.25%, avg=74368.00, stdev=4276.04, samples=4 00:21:26.670 iops : min= 4304, max= 4928, avg=4648.00, stdev=267.25, samples=4 00:21:26.670 lat (msec) : 4=0.43%, 10=67.65%, 20=31.92% 00:21:26.670 cpu : usr=73.03%, sys=17.48%, ctx=9, majf=0, minf=1 00:21:26.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:26.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:26.670 issued rwts: total=18208,9296,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:26.670 00:21:26.670 Run status group 0 (all jobs): 00:21:26.670 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (298MB), run=2003-2003msec 00:21:26.670 WRITE: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=145MiB (152MB), run=1745-1745msec 00:21:26.670 13:35:31 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:26.670 13:35:32 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:26.670 13:35:32 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:26.670 13:35:32 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:26.670 13:35:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:26.670 13:35:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:26.670 13:35:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:26.670 13:35:32 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:26.671 13:35:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:26.671 13:35:32 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:26.671 13:35:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:26.671 13:35:32 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:27.238 Nvme0n1 00:21:27.238 13:35:32 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:27.238 13:35:32 -- host/fio.sh@53 -- # ls_guid=7bc76cbe-7c72-4c23-882f-09d4ee8e804a 00:21:27.238 13:35:32 -- host/fio.sh@54 -- # get_lvs_free_mb 7bc76cbe-7c72-4c23-882f-09d4ee8e804a 00:21:27.238 13:35:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7bc76cbe-7c72-4c23-882f-09d4ee8e804a 00:21:27.238 13:35:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:27.238 13:35:32 -- common/autotest_common.sh@1355 -- # local fc 00:21:27.238 13:35:32 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:27.238 13:35:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:27.497 13:35:33 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:27.497 { 00:21:27.497 "base_bdev": "Nvme0n1", 00:21:27.497 "block_size": 4096, 00:21:27.497 "cluster_size": 1073741824, 00:21:27.497 "free_clusters": 4, 00:21:27.497 "name": "lvs_0", 00:21:27.497 "total_data_clusters": 4, 00:21:27.497 "uuid": "7bc76cbe-7c72-4c23-882f-09d4ee8e804a" 00:21:27.497 } 00:21:27.497 ]' 00:21:27.497 13:35:33 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7bc76cbe-7c72-4c23-882f-09d4ee8e804a") .free_clusters' 00:21:27.497 13:35:33 -- common/autotest_common.sh@1358 -- # fc=4 00:21:27.497 13:35:33 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7bc76cbe-7c72-4c23-882f-09d4ee8e804a") .cluster_size' 00:21:27.755 13:35:33 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:27.755 13:35:33 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:27.755 4096 00:21:27.755 13:35:33 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:27.755 13:35:33 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:28.028 495be87b-8124-49c5-98d2-d42f614c795e 00:21:28.028 13:35:33 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:28.301 13:35:33 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:28.301 13:35:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:28.559 13:35:34 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.559 13:35:34 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.559 13:35:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:28.559 13:35:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:28.559 13:35:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:28.559 13:35:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.559 13:35:34 -- common/autotest_common.sh@1330 -- # shift 00:21:28.559 13:35:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:28.559 13:35:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.559 13:35:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.559 13:35:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:28.559 13:35:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:28.559 13:35:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:28.559 13:35:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:28.559 13:35:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:28.818 13:35:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:28.818 13:35:34 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:28.818 13:35:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:28.818 13:35:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:28.818 13:35:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:28.818 13:35:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:28.818 13:35:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:28.818 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:28.818 fio-3.35 00:21:28.818 Starting 1 thread 00:21:31.349 00:21:31.349 test: (groupid=0, jobs=1): err= 0: pid=95029: Sun Dec 15 13:35:36 2024 00:21:31.349 read: IOPS=6650, BW=26.0MiB/s (27.2MB/s)(52.1MiB/2007msec) 00:21:31.349 slat (nsec): min=1728, max=332881, avg=2843.84, stdev=4215.03 00:21:31.349 clat (usec): min=4116, max=17873, avg=10229.52, stdev=965.52 00:21:31.349 lat (usec): min=4125, max=17875, avg=10232.36, stdev=965.34 00:21:31.349 clat percentiles (usec): 00:21:31.349 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:21:31.349 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:21:31.349 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11731], 00:21:31.349 | 99.00th=[12518], 99.50th=[12911], 99.90th=[15401], 99.95th=[16712], 00:21:31.349 | 99.99th=[17695] 00:21:31.349 bw ( KiB/s): min=25928, max=27152, per=99.82%, avg=26554.00, stdev=509.47, samples=4 00:21:31.349 iops : min= 6482, max= 6788, avg=6638.50, stdev=127.37, samples=4 00:21:31.349 write: IOPS=6657, BW=26.0MiB/s (27.3MB/s)(52.2MiB/2007msec); 0 zone resets 00:21:31.349 slat (nsec): min=1873, max=267374, avg=3000.07, stdev=3302.25 00:21:31.349 clat (usec): min=2420, max=15298, avg=8922.14, stdev=820.64 00:21:31.349 lat (usec): min=2434, max=15300, avg=8925.14, stdev=820.50 00:21:31.349 clat percentiles (usec): 00:21:31.349 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8291], 00:21:31.349 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:21:31.349 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:21:31.349 | 99.00th=[10683], 99.50th=[10945], 99.90th=[13042], 99.95th=[14877], 00:21:31.349 | 99.99th=[15270] 00:21:31.349 bw ( KiB/s): min=26184, max=26840, per=99.93%, avg=26610.00, stdev=292.22, samples=4 00:21:31.349 iops : min= 6546, max= 6710, avg=6652.50, stdev=73.05, samples=4 00:21:31.349 lat (msec) : 4=0.03%, 10=66.66%, 20=33.31% 00:21:31.349 cpu : usr=69.74%, sys=22.38%, ctx=5, majf=0, minf=5 00:21:31.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:31.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:31.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:31.349 issued rwts: total=13348,13361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:31.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:31.349 00:21:31.349 Run status group 0 (all jobs): 00:21:31.349 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=52.1MiB (54.7MB), run=2007-2007msec 00:21:31.350 WRITE: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.2MiB (54.7MB), run=2007-2007msec 00:21:31.350 13:35:36 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:31.350 13:35:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:31.608 13:35:37 -- host/fio.sh@64 -- # ls_nested_guid=f6a0f6c0-f424-49fb-ad15-7089983a3a86 00:21:31.608 13:35:37 -- host/fio.sh@65 -- # get_lvs_free_mb f6a0f6c0-f424-49fb-ad15-7089983a3a86 00:21:31.608 13:35:37 -- common/autotest_common.sh@1353 -- # local lvs_uuid=f6a0f6c0-f424-49fb-ad15-7089983a3a86 00:21:31.608 13:35:37 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:31.608 13:35:37 -- common/autotest_common.sh@1355 -- # local fc 00:21:31.608 13:35:37 -- common/autotest_common.sh@1356 -- # local cs 00:21:31.608 13:35:37 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:31.867 13:35:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:31.867 { 00:21:31.867 "base_bdev": "Nvme0n1", 00:21:31.867 "block_size": 4096, 00:21:31.867 "cluster_size": 1073741824, 00:21:31.867 "free_clusters": 0, 00:21:31.867 "name": "lvs_0", 00:21:31.867 "total_data_clusters": 4, 00:21:31.867 "uuid": "7bc76cbe-7c72-4c23-882f-09d4ee8e804a" 00:21:31.867 }, 00:21:31.867 { 00:21:31.867 "base_bdev": "495be87b-8124-49c5-98d2-d42f614c795e", 00:21:31.867 "block_size": 4096, 00:21:31.867 "cluster_size": 4194304, 00:21:31.867 "free_clusters": 1022, 00:21:31.867 "name": "lvs_n_0", 00:21:31.867 "total_data_clusters": 1022, 00:21:31.867 "uuid": "f6a0f6c0-f424-49fb-ad15-7089983a3a86" 00:21:31.867 } 00:21:31.867 ]' 00:21:31.867 13:35:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="f6a0f6c0-f424-49fb-ad15-7089983a3a86") .free_clusters' 00:21:32.126 13:35:37 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:32.126 13:35:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="f6a0f6c0-f424-49fb-ad15-7089983a3a86") .cluster_size' 00:21:32.126 13:35:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:32.126 13:35:37 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:32.126 4088 00:21:32.126 13:35:37 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:32.126 13:35:37 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:32.384 25017f14-b9a3-4bd1-993f-b398ff5d1902 00:21:32.384 13:35:37 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:32.642 13:35:38 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:32.901 13:35:38 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:33.160 13:35:38 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:33.160 13:35:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:33.160 13:35:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:33.160 13:35:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:33.160 
13:35:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:33.160 13:35:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:33.160 13:35:38 -- common/autotest_common.sh@1330 -- # shift 00:21:33.160 13:35:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:33.160 13:35:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:33.160 13:35:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:33.160 13:35:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:33.160 13:35:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:33.160 13:35:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:33.160 13:35:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:33.160 13:35:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:33.160 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:33.160 fio-3.35 00:21:33.160 Starting 1 thread 00:21:35.695 00:21:35.695 test: (groupid=0, jobs=1): err= 0: pid=95155: Sun Dec 15 13:35:41 2024 00:21:35.695 read: IOPS=5662, BW=22.1MiB/s (23.2MB/s)(44.5MiB/2010msec) 00:21:35.695 slat (nsec): min=1839, max=293555, avg=2911.71, stdev=4385.81 00:21:35.695 clat (usec): min=4275, max=21947, avg=12079.56, stdev=1314.44 00:21:35.695 lat (usec): min=4284, max=21949, avg=12082.47, stdev=1314.25 00:21:35.695 clat percentiles (usec): 00:21:35.695 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10552], 20.00th=[11076], 00:21:35.695 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:21:35.695 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:21:35.695 | 99.00th=[15270], 99.50th=[15664], 99.90th=[19792], 99.95th=[20841], 00:21:35.695 | 99.99th=[21365] 00:21:35.695 bw ( KiB/s): min=22120, max=23272, per=99.91%, avg=22630.00, stdev=504.71, samples=4 00:21:35.695 iops : min= 5530, max= 5818, avg=5657.50, stdev=126.18, samples=4 00:21:35.695 write: IOPS=5630, BW=22.0MiB/s (23.1MB/s)(44.2MiB/2010msec); 0 zone resets 00:21:35.695 slat (nsec): min=1936, max=206916, avg=3085.02, stdev=3441.65 00:21:35.695 clat (usec): min=2075, max=21111, avg=10492.36, stdev=1145.68 00:21:35.695 lat (usec): min=2086, max=21113, avg=10495.44, stdev=1145.53 00:21:35.695 clat percentiles (usec): 00:21:35.695 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:21:35.695 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:21:35.695 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:21:35.695 | 99.00th=[13042], 99.50th=[13435], 99.90th=[19006], 99.95th=[20055], 00:21:35.695 | 99.99th=[21103] 
00:21:35.695 bw ( KiB/s): min=21568, max=24328, per=99.97%, avg=22514.00, stdev=1237.60, samples=4 00:21:35.695 iops : min= 5392, max= 6082, avg=5628.50, stdev=309.40, samples=4 00:21:35.695 lat (msec) : 4=0.04%, 10=18.22%, 20=81.66%, 50=0.08% 00:21:35.695 cpu : usr=71.58%, sys=21.45%, ctx=4, majf=0, minf=5 00:21:35.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:35.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.696 issued rwts: total=11382,11317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.696 00:21:35.696 Run status group 0 (all jobs): 00:21:35.696 READ: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.6MB), run=2010-2010msec 00:21:35.696 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.2MiB (46.4MB), run=2010-2010msec 00:21:35.696 13:35:41 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:35.696 13:35:41 -- host/fio.sh@74 -- # sync 00:21:35.955 13:35:41 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:36.213 13:35:41 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:36.472 13:35:41 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:36.731 13:35:42 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:36.731 13:35:42 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:37.677 13:35:43 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:37.677 13:35:43 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:37.677 13:35:43 -- host/fio.sh@86 -- # nvmftestfini 00:21:37.677 13:35:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.677 13:35:43 -- nvmf/common.sh@116 -- # sync 00:21:37.677 13:35:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.677 13:35:43 -- nvmf/common.sh@119 -- # set +e 00:21:37.677 13:35:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.677 13:35:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.677 rmmod nvme_tcp 00:21:37.677 rmmod nvme_fabrics 00:21:37.677 rmmod nvme_keyring 00:21:37.677 13:35:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.936 13:35:43 -- nvmf/common.sh@123 -- # set -e 00:21:37.936 13:35:43 -- nvmf/common.sh@124 -- # return 0 00:21:37.936 13:35:43 -- nvmf/common.sh@477 -- # '[' -n 94702 ']' 00:21:37.936 13:35:43 -- nvmf/common.sh@478 -- # killprocess 94702 00:21:37.936 13:35:43 -- common/autotest_common.sh@936 -- # '[' -z 94702 ']' 00:21:37.936 13:35:43 -- common/autotest_common.sh@940 -- # kill -0 94702 00:21:37.936 13:35:43 -- common/autotest_common.sh@941 -- # uname 00:21:37.936 13:35:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:37.936 13:35:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94702 00:21:37.936 killing process with pid 94702 00:21:37.936 13:35:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:37.936 13:35:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:37.936 13:35:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94702' 00:21:37.936 
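The nvmf_fio_host run being torn down here followed a short RPC sequence; a condensed sketch of it, with paths, names and sizes copied from the trace above (the $rpc shorthand is only illustrative, not part of the test script), might look like:

# Sketch only, condensed from the rpc.py calls traced above (host/fio.sh).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# free MiB in a logical volume store = free_clusters * cluster_size / 1 MiB
#   lvs_0:    4 clusters * 1073741824 B = 4096 MiB
#   lvs_n_0: 1022 clusters *   4194304 B = 4088 MiB
$rpc bdev_lvol_create -l lvs_0 lbd_0 4096                                  # carve a 4096 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

# drive I/O through the SPDK fio plugin, preloaded as in the fio_nvme wrapper above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096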
13:35:43 -- common/autotest_common.sh@955 -- # kill 94702 00:21:37.936 13:35:43 -- common/autotest_common.sh@960 -- # wait 94702 00:21:38.195 13:35:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:38.195 13:35:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:38.195 13:35:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:38.195 13:35:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.195 13:35:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:38.195 13:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.195 13:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.195 13:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.195 13:35:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:38.195 00:21:38.195 real 0m20.053s 00:21:38.195 user 1m27.567s 00:21:38.195 sys 0m4.582s 00:21:38.195 13:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:38.195 13:35:43 -- common/autotest_common.sh@10 -- # set +x 00:21:38.195 ************************************ 00:21:38.195 END TEST nvmf_fio_host 00:21:38.195 ************************************ 00:21:38.195 13:35:43 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:38.195 13:35:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:38.195 13:35:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.195 13:35:43 -- common/autotest_common.sh@10 -- # set +x 00:21:38.195 ************************************ 00:21:38.195 START TEST nvmf_failover 00:21:38.195 ************************************ 00:21:38.195 13:35:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:38.195 * Looking for test storage... 00:21:38.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:38.454 13:35:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:38.454 13:35:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:38.454 13:35:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:38.455 13:35:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:38.455 13:35:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:38.455 13:35:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:38.455 13:35:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:38.455 13:35:43 -- scripts/common.sh@335 -- # IFS=.-: 00:21:38.455 13:35:43 -- scripts/common.sh@335 -- # read -ra ver1 00:21:38.455 13:35:43 -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.455 13:35:43 -- scripts/common.sh@336 -- # read -ra ver2 00:21:38.455 13:35:43 -- scripts/common.sh@337 -- # local 'op=<' 00:21:38.455 13:35:43 -- scripts/common.sh@339 -- # ver1_l=2 00:21:38.455 13:35:43 -- scripts/common.sh@340 -- # ver2_l=1 00:21:38.455 13:35:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:38.455 13:35:43 -- scripts/common.sh@343 -- # case "$op" in 00:21:38.455 13:35:43 -- scripts/common.sh@344 -- # : 1 00:21:38.455 13:35:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:38.455 13:35:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.455 13:35:43 -- scripts/common.sh@364 -- # decimal 1 00:21:38.455 13:35:43 -- scripts/common.sh@352 -- # local d=1 00:21:38.455 13:35:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.455 13:35:43 -- scripts/common.sh@354 -- # echo 1 00:21:38.455 13:35:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:38.455 13:35:43 -- scripts/common.sh@365 -- # decimal 2 00:21:38.455 13:35:43 -- scripts/common.sh@352 -- # local d=2 00:21:38.455 13:35:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.455 13:35:43 -- scripts/common.sh@354 -- # echo 2 00:21:38.455 13:35:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:38.455 13:35:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:38.455 13:35:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:38.455 13:35:43 -- scripts/common.sh@367 -- # return 0 00:21:38.455 13:35:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.455 13:35:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 13:35:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 13:35:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 13:35:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:38.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.455 --rc genhtml_branch_coverage=1 00:21:38.455 --rc genhtml_function_coverage=1 00:21:38.455 --rc genhtml_legend=1 00:21:38.455 --rc geninfo_all_blocks=1 00:21:38.455 --rc geninfo_unexecuted_blocks=1 00:21:38.455 00:21:38.455 ' 00:21:38.455 13:35:43 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:38.455 13:35:43 -- nvmf/common.sh@7 -- # uname -s 00:21:38.455 13:35:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.455 13:35:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.455 13:35:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.455 13:35:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.455 13:35:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.455 13:35:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.455 13:35:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.455 13:35:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.455 13:35:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.455 13:35:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.455 13:35:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:21:38.455 
13:35:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:21:38.455 13:35:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.455 13:35:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.455 13:35:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:38.455 13:35:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:38.455 13:35:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.455 13:35:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.455 13:35:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.455 13:35:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 13:35:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 13:35:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 13:35:43 -- paths/export.sh@5 -- # export PATH 00:21:38.455 13:35:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.455 13:35:43 -- nvmf/common.sh@46 -- # : 0 00:21:38.455 13:35:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:38.455 13:35:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:38.455 13:35:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:38.455 13:35:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.455 13:35:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.455 13:35:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
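The nvmftestinit sequence traced below (nvmf_veth_init) places the NVMe-oF target in its own network namespace before the failover test starts. A minimal standalone sketch of that topology, with interface names and addresses taken from the trace and the second target interface (nvmf_tgt_if2 / 10.0.0.3) omitted for brevity, might look like:

# Illustrative only; requires root and iproute2, mirrors the ip/iptables calls below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                             # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                            # bridge the two veth peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                         # reachability check, as in the trace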
00:21:38.455 13:35:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:38.455 13:35:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:38.455 13:35:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.455 13:35:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.455 13:35:43 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:38.455 13:35:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.455 13:35:43 -- host/failover.sh@18 -- # nvmftestinit 00:21:38.456 13:35:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:38.456 13:35:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.456 13:35:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:38.456 13:35:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:38.456 13:35:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:38.456 13:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.456 13:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.456 13:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.456 13:35:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:38.456 13:35:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:38.456 13:35:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:38.456 13:35:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:38.456 13:35:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:38.456 13:35:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:38.456 13:35:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.456 13:35:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.456 13:35:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:38.456 13:35:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:38.456 13:35:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:38.456 13:35:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:38.456 13:35:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:38.456 13:35:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.456 13:35:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:38.456 13:35:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:38.456 13:35:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:38.456 13:35:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:38.456 13:35:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:38.456 13:35:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:38.456 Cannot find device "nvmf_tgt_br" 00:21:38.456 13:35:44 -- nvmf/common.sh@154 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:38.456 Cannot find device "nvmf_tgt_br2" 00:21:38.456 13:35:44 -- nvmf/common.sh@155 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:38.456 13:35:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:38.456 Cannot find device "nvmf_tgt_br" 00:21:38.456 13:35:44 -- nvmf/common.sh@157 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:38.456 Cannot find device "nvmf_tgt_br2" 00:21:38.456 13:35:44 -- nvmf/common.sh@158 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:38.456 13:35:44 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:38.456 13:35:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:38.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.456 13:35:44 -- nvmf/common.sh@161 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:38.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:38.456 13:35:44 -- nvmf/common.sh@162 -- # true 00:21:38.456 13:35:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:38.456 13:35:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:38.715 13:35:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:38.715 13:35:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:38.715 13:35:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:38.715 13:35:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:38.715 13:35:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:38.715 13:35:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:38.715 13:35:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:38.715 13:35:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:38.715 13:35:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:38.715 13:35:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:38.715 13:35:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:38.715 13:35:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:38.715 13:35:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:38.715 13:35:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:38.715 13:35:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:38.715 13:35:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:38.715 13:35:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:38.715 13:35:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:38.715 13:35:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:38.715 13:35:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:38.715 13:35:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:38.715 13:35:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:38.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:38.715 00:21:38.715 --- 10.0.0.2 ping statistics --- 00:21:38.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.715 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:38.715 13:35:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:38.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:38.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:38.715 00:21:38.715 --- 10.0.0.3 ping statistics --- 00:21:38.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.715 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:38.715 13:35:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:38.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:38.715 00:21:38.715 --- 10.0.0.1 ping statistics --- 00:21:38.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.715 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:38.715 13:35:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.715 13:35:44 -- nvmf/common.sh@421 -- # return 0 00:21:38.715 13:35:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:38.715 13:35:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.715 13:35:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:38.715 13:35:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:38.715 13:35:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.715 13:35:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:38.715 13:35:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:38.715 13:35:44 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:38.715 13:35:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:38.715 13:35:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.715 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:21:38.715 13:35:44 -- nvmf/common.sh@469 -- # nvmfpid=95434 00:21:38.715 13:35:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:38.715 13:35:44 -- nvmf/common.sh@470 -- # waitforlisten 95434 00:21:38.715 13:35:44 -- common/autotest_common.sh@829 -- # '[' -z 95434 ']' 00:21:38.715 13:35:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.715 13:35:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.715 13:35:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.715 13:35:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.715 13:35:44 -- common/autotest_common.sh@10 -- # set +x 00:21:38.715 [2024-12-15 13:35:44.392776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:38.715 [2024-12-15 13:35:44.392856] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.974 [2024-12-15 13:35:44.532939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:38.974 [2024-12-15 13:35:44.598016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:38.974 [2024-12-15 13:35:44.598141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.974 [2024-12-15 13:35:44.598154] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.974 [2024-12-15 13:35:44.598162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.974 [2024-12-15 13:35:44.598299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.974 [2024-12-15 13:35:44.599247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.974 [2024-12-15 13:35:44.599296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.909 13:35:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.909 13:35:45 -- common/autotest_common.sh@862 -- # return 0 00:21:39.909 13:35:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:39.909 13:35:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.909 13:35:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.909 13:35:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.909 13:35:45 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.167 [2024-12-15 13:35:45.619356] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.167 13:35:45 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:40.426 Malloc0 00:21:40.426 13:35:45 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.685 13:35:46 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.685 13:35:46 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.944 [2024-12-15 13:35:46.595035] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.944 13:35:46 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:41.202 [2024-12-15 13:35:46.879224] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:41.461 13:35:46 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:41.461 [2024-12-15 13:35:47.095409] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:41.461 13:35:47 -- host/failover.sh@31 -- # bdevperf_pid=95551 00:21:41.461 13:35:47 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:41.461 13:35:47 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.461 13:35:47 -- host/failover.sh@34 -- # waitforlisten 95551 /var/tmp/bdevperf.sock 00:21:41.461 13:35:47 -- common/autotest_common.sh@829 -- # '[' -z 95551 ']' 00:21:41.461 13:35:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.461 13:35:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
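The failover exercise traced below amounts to attaching one controller over several TCP paths and then removing and re-adding listeners while bdevperf keeps I/O in flight. A condensed sketch of that sequence, with the rpc.py invocations copied from the trace (the backgrounding and the exact sleeps are illustrative), might look like:

# Sketch only, condensed from host/failover.sh steps 35-59 as traced below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1

# attach the same subsystem over two TCP paths
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn

# start I/O, then pull paths out from under it
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420        # drop the first path
sleep 3
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421        # drop the second path
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420           # bring a path back
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
wait                                                                       # I/O should survive every transition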
00:21:41.461 13:35:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.461 13:35:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.461 13:35:47 -- common/autotest_common.sh@10 -- # set +x 00:21:42.839 13:35:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.839 13:35:48 -- common/autotest_common.sh@862 -- # return 0 00:21:42.839 13:35:48 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.839 NVMe0n1 00:21:42.839 13:35:48 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:43.098 00:21:43.098 13:35:48 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.098 13:35:48 -- host/failover.sh@39 -- # run_test_pid=95593 00:21:43.098 13:35:48 -- host/failover.sh@41 -- # sleep 1 00:21:44.476 13:35:49 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.476 [2024-12-15 13:35:50.001475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001668] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the 
state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001891] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 [2024-12-15 13:35:50.001967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6fc90 is same with the state(5) to be set 00:21:44.476 13:35:50 -- host/failover.sh@45 -- # sleep 3 00:21:47.785 13:35:53 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.785 00:21:47.785 13:35:53 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:48.044 [2024-12-15 13:35:53.574098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574207] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574224] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574233] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574266] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574407] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574443] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.044 [2024-12-15 13:35:53.574536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574567] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the 
state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574590] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574730] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574740] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 [2024-12-15 13:35:53.574784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71380 is same with the state(5) to be set 00:21:48.045 13:35:53 -- host/failover.sh@50 -- # sleep 3 00:21:51.331 13:35:56 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.331 [2024-12-15 13:35:56.847202] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.331 13:35:56 -- host/failover.sh@55 -- # sleep 1 00:21:52.267 13:35:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:52.527 [2024-12-15 13:35:58.074712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with 
the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.527 [2024-12-15 13:35:58.074850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.074998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075063] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075123] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 
13:35:58.075161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075212] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 [2024-12-15 13:35:58.075270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a71a60 is same with the state(5) to be set 00:21:52.528 13:35:58 -- host/failover.sh@59 -- # wait 95593 00:21:59.101 0 00:21:59.101 13:36:03 -- host/failover.sh@61 -- # killprocess 95551 00:21:59.101 13:36:03 -- common/autotest_common.sh@936 -- # '[' -z 95551 ']' 00:21:59.101 13:36:03 -- common/autotest_common.sh@940 -- # kill -0 95551 00:21:59.101 13:36:03 -- common/autotest_common.sh@941 -- # uname 00:21:59.101 13:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.101 13:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95551 00:21:59.101 killing process with pid 95551 00:21:59.101 13:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:59.101 13:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:59.101 13:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95551' 00:21:59.101 13:36:03 -- common/autotest_common.sh@955 -- # kill 95551 00:21:59.101 13:36:03 -- common/autotest_common.sh@960 -- # wait 95551 00:21:59.101 13:36:04 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:59.101 [2024-12-15 13:35:47.154489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:59.101 [2024-12-15 13:35:47.154579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95551 ] 00:21:59.101 [2024-12-15 13:35:47.292529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.101 [2024-12-15 13:35:47.398301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.101 Running I/O for 15 seconds... 00:21:59.101 [2024-12-15 13:35:50.002323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 
13:35:50.002665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.101 [2024-12-15 13:35:50.002883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.101 [2024-12-15 13:35:50.002899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.002913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.002929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.002990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.102 [2024-12-15 13:35:50.003586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:59.102 [2024-12-15 13:35:50.003683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.102 [2024-12-15 13:35:50.003698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.102 [2024-12-15 13:35:50.003725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.102 [2024-12-15 13:35:50.003928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.102 [2024-12-15 13:35:50.003941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.003963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.003991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:45 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5576 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 
13:35:50.004815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.103 [2024-12-15 13:35:50.004897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.103 [2024-12-15 13:35:50.004910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.103 [2024-12-15 13:35:50.004922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.004936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.004948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.004961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.004973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.004986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.004998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:59.104 [2024-12-15 13:35:50.005704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.104 [2024-12-15 13:35:50.005841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.104 [2024-12-15 13:35:50.005934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.104 [2024-12-15 13:35:50.005949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.005961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.005989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.105 [2024-12-15 13:35:50.006000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:50.006211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a130 is same with the state(5) to be set 00:21:59.105 [2024-12-15 13:35:50.006239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.105 [2024-12-15 13:35:50.006248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.105 [2024-12-15 13:35:50.006258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5424 len:8 PRP1 0x0 PRP2 0x0 00:21:59.105 [2024-12-15 13:35:50.006269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006338] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59a130 was disconnected and freed. reset controller. 
00:21:59.105 [2024-12-15 13:35:50.006361] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:59.105 [2024-12-15 13:35:50.006420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.105 [2024-12-15 13:35:50.006441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.105 [2024-12-15 13:35:50.006467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.105 [2024-12-15 13:35:50.006493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.105 [2024-12-15 13:35:50.006518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:50.006531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:59.105 [2024-12-15 13:35:50.008669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.105 [2024-12-15 13:35:50.008705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x515cb0 (9): Bad file descriptor 00:21:59.105 [2024-12-15 13:35:50.041521] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:59.105 [2024-12-15 13:35:53.574917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.574998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.105 [2024-12-15 13:35:53.575660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.105 [2024-12-15 13:35:53.575676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.575960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.575997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.106 [2024-12-15 13:35:53.576450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.106 [2024-12-15 13:35:53.576503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36008 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.106 [2024-12-15 13:35:53.576710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.106 [2024-12-15 13:35:53.576798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.106 [2024-12-15 13:35:53.576815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.576844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.576861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.576876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.576892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.576911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.576928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.576942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.576958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 
[2024-12-15 13:35:53.576972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.107 [2024-12-15 13:35:53.577833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.577966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.577997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.578024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.578038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.578050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.578064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.107 [2024-12-15 13:35:53.578090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.107 [2024-12-15 13:35:53.578105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 
[2024-12-15 13:35:53.578469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.578983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.578997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.579024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.579052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:16 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.108 [2024-12-15 13:35:53.579180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.108 [2024-12-15 13:35:53.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.108 [2024-12-15 13:35:53.579311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.109 [2024-12-15 13:35:53.579324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:59.109 [2024-12-15 13:35:53.579421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:53.579675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x574b10 is same with the state(5) to be set 00:21:59.109 [2024-12-15 13:35:53.579738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.109 [2024-12-15 13:35:53.579756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.109 [2024-12-15 13:35:53.579768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35968 len:8 PRP1 0x0 PRP2 0x0 00:21:59.109 [2024-12-15 13:35:53.579782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579850] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x574b10 was disconnected and freed. reset controller. 
00:21:59.109 [2024-12-15 13:35:53.579869] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:59.109 [2024-12-15 13:35:53.579930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:53.579953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:53.579982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.579996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:53.580010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.580024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:53.580037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:53.580079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:59.109 [2024-12-15 13:35:53.582829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.109 [2024-12-15 13:35:53.582873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x515cb0 (9): Bad file descriptor 00:21:59.109 [2024-12-15 13:35:53.619898] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
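
This block repeats the same sequence against the next path: the failover moves the controller from 10.0.0.2:4421 to 10.0.0.2:4422, the queued admin ASYNC EVENT REQUESTs are aborted with the SQ-deletion status, the controller is marked failed and disconnected, and the reset then completes successfully on the new address. A rough stand-alone C model of that path advance follows; the names and structure are illustrative only and do not mirror the bdev_nvme implementation.

#include <stdio.h>
#include <stddef.h>

/* Toy model of stepping through the alternate TCP listeners used by this test.
 * bdev_nvme keeps a list of transport IDs and fails over to the next one when
 * the current connection dies; this just reproduces the log messages. */
static const char *paths[] = { "10.0.0.2:4420", "10.0.0.2:4421", "10.0.0.2:4422" };
static size_t current;

static void failover_to_next(void)
{
    size_t next = (current + 1) % (sizeof(paths) / sizeof(paths[0]));
    printf("Start failover from %s to %s\n", paths[current], paths[next]);
    current = next;
    /* the real code aborts queued I/O, disconnects and resets the controller here */
    printf("Resetting controller successful.\n");
}

int main(void)
{
    failover_to_next();   /* 4420 -> 4421 */
    failover_to_next();   /* 4421 -> 4422 */
    return 0;
}
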
00:21:59.109 [2024-12-15 13:35:58.073416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:58.073519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.073621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:58.073640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.073655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:58.073669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.073684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.109 [2024-12-15 13:35:58.073699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.073713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x515cb0 is same with the state(5) to be set 00:21:59.109 [2024-12-15 13:35:58.075386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.109 [2024-12-15 13:35:58.075801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.109 [2024-12-15 13:35:58.075815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.110 [2024-12-15 13:35:58.075845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.110 [2024-12-15 13:35:58.075873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.110 [2024-12-15 13:35:58.075912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.110 [2024-12-15 13:35:58.075943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.110 [2024-12-15 13:35:58.075972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.110 [2024-12-15 13:35:58.075988 - 13:35:58.079509] nvme_qpair.c: nvme_io_qpair_print_command / spdk_nvme_print_completion: every remaining READ and WRITE command outstanding on sqid:1 (lba 47848-49056, len:8, SGL transport/data block) was printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue was deleted for the controller reset.
00:21:59.113 [2024-12-15 13:35:58.079523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61c060 is same with the state(5) to be set 00:21:59.113 [2024-12-15 13:35:58.079539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:59.113 [2024-12-15 13:35:58.079549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:59.113 [2024-12-15 13:35:58.079565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48504 len:8 PRP1 0x0 PRP2 0x0 00:21:59.113 [2024-12-15 13:35:58.079577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.113 [2024-12-15 13:35:58.079689] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61c060 was disconnected and freed. reset controller. 00:21:59.113 [2024-12-15 13:35:58.079711] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:59.113 [2024-12-15 13:35:58.079727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:59.113 [2024-12-15 13:35:58.082324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.113 [2024-12-15 13:35:58.082364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x515cb0 (9): Bad file descriptor 00:21:59.113 [2024-12-15 13:35:58.110431] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:59.113 00:21:59.113 Latency(us) 00:21:59.113 [2024-12-15T13:36:04.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.113 [2024-12-15T13:36:04.803Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:59.113 Verification LBA range: start 0x0 length 0x4000 00:21:59.113 NVMe0n1 : 15.01 14571.17 56.92 326.08 0.00 8574.61 636.74 16086.11 00:21:59.113 [2024-12-15T13:36:04.803Z] =================================================================================================================== 00:21:59.113 [2024-12-15T13:36:04.803Z] Total : 14571.17 56.92 326.08 0.00 8574.61 636.74 16086.11 00:21:59.113 Received shutdown signal, test time was about 15.000000 seconds 00:21:59.113 00:21:59.113 Latency(us) 00:21:59.113 [2024-12-15T13:36:04.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.113 [2024-12-15T13:36:04.803Z] =================================================================================================================== 00:21:59.113 [2024-12-15T13:36:04.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.113 13:36:04 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:59.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:59.113 13:36:04 -- host/failover.sh@65 -- # count=3 00:21:59.113 13:36:04 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:59.113 13:36:04 -- host/failover.sh@73 -- # bdevperf_pid=95799 00:21:59.113 13:36:04 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:59.113 13:36:04 -- host/failover.sh@75 -- # waitforlisten 95799 /var/tmp/bdevperf.sock 00:21:59.113 13:36:04 -- common/autotest_common.sh@829 -- # '[' -z 95799 ']' 00:21:59.113 13:36:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.113 13:36:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.113 13:36:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.113 13:36:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.113 13:36:04 -- common/autotest_common.sh@10 -- # set +x 00:21:59.680 13:36:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.680 13:36:05 -- common/autotest_common.sh@862 -- # return 0 00:21:59.680 13:36:05 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:59.938 [2024-12-15 13:36:05.453027] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.938 13:36:05 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:00.196 [2024-12-15 13:36:05.713178] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:00.196 13:36:05 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.454 NVMe0n1 00:22:00.454 13:36:06 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.713 00:22:00.713 13:36:06 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.971 00:22:00.971 13:36:06 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.971 13:36:06 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:01.231 13:36:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.489 13:36:07 -- host/failover.sh@87 -- # sleep 3 00:22:04.782 13:36:10 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.782 13:36:10 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:04.782 13:36:10 -- host/failover.sh@90 -- # run_test_pid=95943 00:22:04.782 13:36:10 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.782 13:36:10 -- host/failover.sh@92 -- # wait 95943 00:22:06.157 0 00:22:06.157 13:36:11 -- host/failover.sh@94 -- # cat 
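The sequence traced above is the core of the failover check: the previous 15 s run is expected to have logged three successful resets, then a fresh bdevperf instance is attached to the subsystem through three listener ports and the active path is torn down so the initiator has to fail over. A condensed sketch of the same rpc.py calls follows; it is not the literal failover.sh, and $log_file stands in for the try.txt capture the script actually greps.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # the previous 15 s run is expected to have logged three successful resets
    count=$(grep -c 'Resetting controller successful' "$log_file")
    (( count == 3 )) || exit 1

    # publish two more listeners on the target, then attach all three paths to bdevperf
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done

    # drop the active path and give the initiator time to fail over to the next listener
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    sleep 3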
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:06.157 [2024-12-15 13:36:04.280676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:06.157 [2024-12-15 13:36:04.280786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95799 ] 00:22:06.157 [2024-12-15 13:36:04.440961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.157 [2024-12-15 13:36:04.516724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.157 [2024-12-15 13:36:07.131358] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:06.157 [2024-12-15 13:36:07.131468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.157 [2024-12-15 13:36:07.131492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-12-15 13:36:07.131509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.157 [2024-12-15 13:36:07.131523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-12-15 13:36:07.131537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.157 [2024-12-15 13:36:07.131550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-12-15 13:36:07.131574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.157 [2024-12-15 13:36:07.131623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.157 [2024-12-15 13:36:07.131640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.157 [2024-12-15 13:36:07.131692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.157 [2024-12-15 13:36:07.131725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68bcb0 (9): Bad file descriptor 00:22:06.157 [2024-12-15 13:36:07.141126] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:06.157 Running I/O for 1 seconds... 
00:22:06.157 00:22:06.157 Latency(us) 00:22:06.157 [2024-12-15T13:36:11.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.157 [2024-12-15T13:36:11.847Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:06.157 Verification LBA range: start 0x0 length 0x4000 00:22:06.157 NVMe0n1 : 1.01 15186.06 59.32 0.00 0.00 8389.60 1288.38 9592.09 00:22:06.157 [2024-12-15T13:36:11.847Z] =================================================================================================================== 00:22:06.157 [2024-12-15T13:36:11.847Z] Total : 15186.06 59.32 0.00 0.00 8389.60 1288.38 9592.09 00:22:06.157 13:36:11 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.157 13:36:11 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:06.415 13:36:11 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.674 13:36:12 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:06.674 13:36:12 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.932 13:36:12 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.932 13:36:12 -- host/failover.sh@101 -- # sleep 3 00:22:10.276 13:36:15 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.276 13:36:15 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:10.276 13:36:15 -- host/failover.sh@108 -- # killprocess 95799 00:22:10.276 13:36:15 -- common/autotest_common.sh@936 -- # '[' -z 95799 ']' 00:22:10.276 13:36:15 -- common/autotest_common.sh@940 -- # kill -0 95799 00:22:10.276 13:36:15 -- common/autotest_common.sh@941 -- # uname 00:22:10.276 13:36:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.276 13:36:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95799 00:22:10.276 killing process with pid 95799 00:22:10.276 13:36:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:10.276 13:36:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:10.276 13:36:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95799' 00:22:10.276 13:36:15 -- common/autotest_common.sh@955 -- # kill 95799 00:22:10.276 13:36:15 -- common/autotest_common.sh@960 -- # wait 95799 00:22:10.534 13:36:16 -- host/failover.sh@110 -- # sync 00:22:10.534 13:36:16 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.793 13:36:16 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:10.793 13:36:16 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:10.793 13:36:16 -- host/failover.sh@116 -- # nvmftestfini 00:22:10.793 13:36:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:10.793 13:36:16 -- nvmf/common.sh@116 -- # sync 00:22:10.793 13:36:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:10.793 13:36:16 -- nvmf/common.sh@119 -- # set +e 00:22:10.793 13:36:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:10.793 13:36:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:10.793 rmmod nvme_tcp 
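Between failovers the script only asserts that the controller name is still registered on the bdevperf side before removing the next path. A minimal sketch of that poll-then-detach step, using the same names as the trace, is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # the controller must still exist after the previous failover
    $RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
    # remove the 4421 path as well, forcing the final failover, then let it settle
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3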
00:22:10.793 rmmod nvme_fabrics 00:22:10.793 rmmod nvme_keyring 00:22:10.793 13:36:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:10.793 13:36:16 -- nvmf/common.sh@123 -- # set -e 00:22:10.793 13:36:16 -- nvmf/common.sh@124 -- # return 0 00:22:10.793 13:36:16 -- nvmf/common.sh@477 -- # '[' -n 95434 ']' 00:22:10.793 13:36:16 -- nvmf/common.sh@478 -- # killprocess 95434 00:22:10.793 13:36:16 -- common/autotest_common.sh@936 -- # '[' -z 95434 ']' 00:22:10.793 13:36:16 -- common/autotest_common.sh@940 -- # kill -0 95434 00:22:10.793 13:36:16 -- common/autotest_common.sh@941 -- # uname 00:22:11.051 13:36:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:11.052 13:36:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95434 00:22:11.052 killing process with pid 95434 00:22:11.052 13:36:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:11.052 13:36:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:11.052 13:36:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95434' 00:22:11.052 13:36:16 -- common/autotest_common.sh@955 -- # kill 95434 00:22:11.052 13:36:16 -- common/autotest_common.sh@960 -- # wait 95434 00:22:11.052 13:36:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:11.052 13:36:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:11.052 13:36:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:11.052 13:36:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.052 13:36:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:11.052 13:36:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.052 13:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.052 13:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.310 13:36:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:11.310 00:22:11.310 real 0m32.946s 00:22:11.310 user 2m6.657s 00:22:11.310 sys 0m5.668s 00:22:11.310 13:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:11.310 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.310 ************************************ 00:22:11.310 END TEST nvmf_failover 00:22:11.310 ************************************ 00:22:11.310 13:36:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:11.310 13:36:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:11.310 13:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.310 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:22:11.310 ************************************ 00:22:11.310 START TEST nvmf_discovery 00:22:11.310 ************************************ 00:22:11.310 13:36:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:11.310 * Looking for test storage... 
00:22:11.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.310 13:36:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:11.310 13:36:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:11.310 13:36:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:11.310 13:36:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:11.310 13:36:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:11.310 13:36:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:11.310 13:36:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:11.310 13:36:16 -- scripts/common.sh@335 -- # IFS=.-: 00:22:11.310 13:36:16 -- scripts/common.sh@335 -- # read -ra ver1 00:22:11.310 13:36:16 -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.310 13:36:16 -- scripts/common.sh@336 -- # read -ra ver2 00:22:11.310 13:36:16 -- scripts/common.sh@337 -- # local 'op=<' 00:22:11.310 13:36:16 -- scripts/common.sh@339 -- # ver1_l=2 00:22:11.310 13:36:16 -- scripts/common.sh@340 -- # ver2_l=1 00:22:11.310 13:36:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:11.310 13:36:16 -- scripts/common.sh@343 -- # case "$op" in 00:22:11.310 13:36:16 -- scripts/common.sh@344 -- # : 1 00:22:11.310 13:36:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:11.310 13:36:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.310 13:36:16 -- scripts/common.sh@364 -- # decimal 1 00:22:11.310 13:36:16 -- scripts/common.sh@352 -- # local d=1 00:22:11.310 13:36:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.310 13:36:16 -- scripts/common.sh@354 -- # echo 1 00:22:11.310 13:36:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:11.310 13:36:16 -- scripts/common.sh@365 -- # decimal 2 00:22:11.310 13:36:16 -- scripts/common.sh@352 -- # local d=2 00:22:11.310 13:36:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.310 13:36:16 -- scripts/common.sh@354 -- # echo 2 00:22:11.310 13:36:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:11.310 13:36:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:11.310 13:36:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:11.310 13:36:16 -- scripts/common.sh@367 -- # return 0 00:22:11.310 13:36:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.310 13:36:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.310 --rc genhtml_branch_coverage=1 00:22:11.310 --rc genhtml_function_coverage=1 00:22:11.310 --rc genhtml_legend=1 00:22:11.310 --rc geninfo_all_blocks=1 00:22:11.310 --rc geninfo_unexecuted_blocks=1 00:22:11.310 00:22:11.310 ' 00:22:11.310 13:36:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.310 --rc genhtml_branch_coverage=1 00:22:11.310 --rc genhtml_function_coverage=1 00:22:11.310 --rc genhtml_legend=1 00:22:11.310 --rc geninfo_all_blocks=1 00:22:11.310 --rc geninfo_unexecuted_blocks=1 00:22:11.310 00:22:11.310 ' 00:22:11.310 13:36:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:11.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.310 --rc genhtml_branch_coverage=1 00:22:11.310 --rc genhtml_function_coverage=1 00:22:11.310 --rc genhtml_legend=1 00:22:11.310 --rc geninfo_all_blocks=1 00:22:11.311 --rc geninfo_unexecuted_blocks=1 00:22:11.311 00:22:11.311 ' 00:22:11.311 
13:36:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:11.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.311 --rc genhtml_branch_coverage=1 00:22:11.311 --rc genhtml_function_coverage=1 00:22:11.311 --rc genhtml_legend=1 00:22:11.311 --rc geninfo_all_blocks=1 00:22:11.311 --rc geninfo_unexecuted_blocks=1 00:22:11.311 00:22:11.311 ' 00:22:11.311 13:36:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.570 13:36:16 -- nvmf/common.sh@7 -- # uname -s 00:22:11.570 13:36:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.570 13:36:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.570 13:36:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.570 13:36:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.570 13:36:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.570 13:36:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.570 13:36:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.570 13:36:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.570 13:36:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.570 13:36:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.570 13:36:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:11.570 13:36:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:11.570 13:36:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.570 13:36:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.570 13:36:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.570 13:36:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.570 13:36:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.570 13:36:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.570 13:36:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.570 13:36:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.570 13:36:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.570 13:36:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.570 13:36:17 -- paths/export.sh@5 -- # export PATH 00:22:11.570 13:36:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.570 13:36:17 -- nvmf/common.sh@46 -- # : 0 00:22:11.570 13:36:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:11.570 13:36:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:11.570 13:36:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:11.570 13:36:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.570 13:36:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.570 13:36:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:11.570 13:36:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:11.570 13:36:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:11.570 13:36:17 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:11.570 13:36:17 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:11.570 13:36:17 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:11.570 13:36:17 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:11.570 13:36:17 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:11.570 13:36:17 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:11.570 13:36:17 -- host/discovery.sh@25 -- # nvmftestinit 00:22:11.570 13:36:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:11.570 13:36:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.570 13:36:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:11.571 13:36:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:11.571 13:36:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:11.571 13:36:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.571 13:36:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.571 13:36:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.571 13:36:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:11.571 13:36:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:11.571 13:36:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:11.571 13:36:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:11.571 13:36:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:11.571 13:36:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:11.571 13:36:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.571 13:36:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.571 13:36:17 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:11.571 13:36:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:11.571 13:36:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.571 13:36:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.571 13:36:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.571 13:36:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.571 13:36:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.571 13:36:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.571 13:36:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.571 13:36:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.571 13:36:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:11.571 13:36:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:11.571 Cannot find device "nvmf_tgt_br" 00:22:11.571 13:36:17 -- nvmf/common.sh@154 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.571 Cannot find device "nvmf_tgt_br2" 00:22:11.571 13:36:17 -- nvmf/common.sh@155 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:11.571 13:36:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:11.571 Cannot find device "nvmf_tgt_br" 00:22:11.571 13:36:17 -- nvmf/common.sh@157 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:11.571 Cannot find device "nvmf_tgt_br2" 00:22:11.571 13:36:17 -- nvmf/common.sh@158 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:11.571 13:36:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:11.571 13:36:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.571 13:36:17 -- nvmf/common.sh@161 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.571 13:36:17 -- nvmf/common.sh@162 -- # true 00:22:11.571 13:36:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.571 13:36:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.571 13:36:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.571 13:36:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.571 13:36:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.571 13:36:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.571 13:36:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.571 13:36:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:11.571 13:36:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:11.571 13:36:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:11.571 13:36:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:11.571 13:36:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:11.571 13:36:17 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:11.571 13:36:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.830 13:36:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.830 13:36:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.830 13:36:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:11.830 13:36:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:11.830 13:36:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.830 13:36:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.830 13:36:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.830 13:36:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.830 13:36:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.830 13:36:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:11.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:22:11.830 00:22:11.830 --- 10.0.0.2 ping statistics --- 00:22:11.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.830 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:11.830 13:36:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:11.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:11.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:11.830 00:22:11.830 --- 10.0.0.3 ping statistics --- 00:22:11.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.830 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:11.830 13:36:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:11.830 00:22:11.830 --- 10.0.0.1 ping statistics --- 00:22:11.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.830 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:11.830 13:36:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.830 13:36:17 -- nvmf/common.sh@421 -- # return 0 00:22:11.830 13:36:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.830 13:36:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.830 13:36:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.830 13:36:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.830 13:36:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.830 13:36:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.830 13:36:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.830 13:36:17 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:11.830 13:36:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.830 13:36:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.830 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.830 13:36:17 -- nvmf/common.sh@469 -- # nvmfpid=96248 00:22:11.830 13:36:17 -- nvmf/common.sh@470 -- # waitforlisten 96248 00:22:11.830 13:36:17 -- common/autotest_common.sh@829 -- # '[' -z 96248 ']' 00:22:11.830 13:36:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.830 13:36:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.830 13:36:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.830 13:36:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.830 13:36:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.830 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.830 [2024-12-15 13:36:17.426074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:11.830 [2024-12-15 13:36:17.426154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.089 [2024-12-15 13:36:17.562883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.089 [2024-12-15 13:36:17.618352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:12.089 [2024-12-15 13:36:17.618497] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.089 [2024-12-15 13:36:17.618510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.089 [2024-12-15 13:36:17.618518] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
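For reference, the nvmf_veth_init sequence traced above can be reproduced by hand outside the test harness; a minimal sketch of the same topology, assuming root plus iproute2/iptables and using the interface names exactly as they appear in the log:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1; the target namespace answers on 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    # bridge the host-side veth peers together and allow NVMe/TCP traffic in
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3   # sanity checks, same as in the log

This only mirrors the commands the harness itself runs; the nvmf_tgt application is then started inside nvmf_tgt_ns_spdk, as shown in the following lines.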
00:22:12.089 [2024-12-15 13:36:17.618548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.029 13:36:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.029 13:36:18 -- common/autotest_common.sh@862 -- # return 0 00:22:13.029 13:36:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:13.029 13:36:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 13:36:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.029 13:36:18 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:13.029 13:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 [2024-12-15 13:36:18.499720] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.029 13:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.029 13:36:18 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:13.029 13:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 [2024-12-15 13:36:18.507859] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:13.029 13:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.029 13:36:18 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:13.029 13:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 null0 00:22:13.029 13:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.029 13:36:18 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:13.029 13:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 null1 00:22:13.029 13:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.029 13:36:18 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:13.029 13:36:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 13:36:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.029 13:36:18 -- host/discovery.sh@45 -- # hostpid=96304 00:22:13.029 13:36:18 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:13.029 13:36:18 -- host/discovery.sh@46 -- # waitforlisten 96304 /tmp/host.sock 00:22:13.029 13:36:18 -- common/autotest_common.sh@829 -- # '[' -z 96304 ']' 00:22:13.029 13:36:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:13.029 13:36:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.029 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:13.029 13:36:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:13.029 13:36:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.029 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:22:13.029 [2024-12-15 13:36:18.596163] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:13.029 [2024-12-15 13:36:18.596289] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96304 ] 00:22:13.288 [2024-12-15 13:36:18.738942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.288 [2024-12-15 13:36:18.806845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.288 [2024-12-15 13:36:18.807033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.857 13:36:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.857 13:36:19 -- common/autotest_common.sh@862 -- # return 0 00:22:13.857 13:36:19 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.857 13:36:19 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:13.857 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.857 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:14.116 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.116 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@72 -- # notify_id=0 00:22:14.116 13:36:19 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.116 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # sort 00:22:14.116 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # xargs 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:14.116 13:36:19 -- host/discovery.sh@79 -- # get_bdev_list 00:22:14.116 13:36:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.116 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.116 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.116 13:36:19 -- host/discovery.sh@55 -- # sort 00:22:14.116 13:36:19 -- host/discovery.sh@55 -- # xargs 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:14.116 13:36:19 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:14.116 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.116 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.116 13:36:19 -- host/discovery.sh@59 
-- # sort 00:22:14.116 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.116 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.116 13:36:19 -- host/discovery.sh@59 -- # xargs 00:22:14.116 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.116 13:36:19 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:14.116 13:36:19 -- host/discovery.sh@83 -- # get_bdev_list 00:22:14.116 13:36:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.117 13:36:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.117 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.117 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.117 13:36:19 -- host/discovery.sh@55 -- # sort 00:22:14.117 13:36:19 -- host/discovery.sh@55 -- # xargs 00:22:14.117 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.117 13:36:19 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:14.117 13:36:19 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:14.117 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.117 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.117 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.117 13:36:19 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:14.117 13:36:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.117 13:36:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.117 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.117 13:36:19 -- host/discovery.sh@59 -- # xargs 00:22:14.117 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.117 13:36:19 -- host/discovery.sh@59 -- # sort 00:22:14.117 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:14.376 13:36:19 -- host/discovery.sh@87 -- # get_bdev_list 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.376 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # sort 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # xargs 00:22:14.376 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.376 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:14.376 13:36:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.376 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.376 [2024-12-15 13:36:19.896212] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.376 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:14.376 13:36:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:14.376 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.376 13:36:19 -- host/discovery.sh@59 -- # sort 00:22:14.376 13:36:19 -- host/discovery.sh@59 -- # xargs 00:22:14.376 13:36:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:14.376 13:36:19 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:14.376 13:36:19 -- host/discovery.sh@93 -- # get_bdev_list 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.376 13:36:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:19 -- common/autotest_common.sh@10 -- # set +x 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # xargs 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # sort 00:22:14.376 13:36:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.376 13:36:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:20 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:14.376 13:36:20 -- host/discovery.sh@94 -- # get_notification_count 00:22:14.376 13:36:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:14.376 13:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:20 -- host/discovery.sh@74 -- # jq '. | length' 00:22:14.376 13:36:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.376 13:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:20 -- host/discovery.sh@74 -- # notification_count=0 00:22:14.376 13:36:20 -- host/discovery.sh@75 -- # notify_id=0 00:22:14.376 13:36:20 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:14.376 13:36:20 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:14.376 13:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.376 13:36:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.635 13:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.635 13:36:20 -- host/discovery.sh@100 -- # sleep 1 00:22:14.894 [2024-12-15 13:36:20.563035] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:14.894 [2024-12-15 13:36:20.563082] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:14.894 [2024-12-15 13:36:20.563099] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.153 [2024-12-15 13:36:20.649141] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:15.153 [2024-12-15 13:36:20.704697] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:15.153 [2024-12-15 13:36:20.704748] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:15.412 13:36:21 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:15.412 13:36:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.412 13:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.412 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:22:15.412 13:36:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.412 13:36:21 -- host/discovery.sh@59 -- # sort 00:22:15.412 13:36:21 -- host/discovery.sh@59 -- # xargs 00:22:15.412 13:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@102 -- # get_bdev_list 00:22:15.671 13:36:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.671 
13:36:21 -- host/discovery.sh@55 -- # sort 00:22:15.671 13:36:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.671 13:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.671 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:22:15.671 13:36:21 -- host/discovery.sh@55 -- # xargs 00:22:15.671 13:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:15.671 13:36:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:15.671 13:36:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:15.671 13:36:21 -- host/discovery.sh@63 -- # sort -n 00:22:15.671 13:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.671 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:22:15.671 13:36:21 -- host/discovery.sh@63 -- # xargs 00:22:15.671 13:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@104 -- # get_notification_count 00:22:15.671 13:36:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:15.671 13:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.671 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:22:15.671 13:36:21 -- host/discovery.sh@74 -- # jq '. | length' 00:22:15.671 13:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@74 -- # notification_count=1 00:22:15.671 13:36:21 -- host/discovery.sh@75 -- # notify_id=1 00:22:15.671 13:36:21 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:15.671 13:36:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.671 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:22:15.671 13:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.671 13:36:21 -- host/discovery.sh@109 -- # sleep 1 00:22:17.048 13:36:22 -- host/discovery.sh@110 -- # get_bdev_list 00:22:17.048 13:36:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.048 13:36:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.048 13:36:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.048 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:22:17.048 13:36:22 -- host/discovery.sh@55 -- # sort 00:22:17.048 13:36:22 -- host/discovery.sh@55 -- # xargs 00:22:17.048 13:36:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.048 13:36:22 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.048 13:36:22 -- host/discovery.sh@111 -- # get_notification_count 00:22:17.048 13:36:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:17.048 13:36:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.048 13:36:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:17.048 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:22:17.048 13:36:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.048 13:36:22 -- host/discovery.sh@74 -- # notification_count=1 00:22:17.048 13:36:22 -- host/discovery.sh@75 -- # notify_id=2 00:22:17.048 13:36:22 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:17.048 13:36:22 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:17.048 13:36:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.048 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:22:17.048 [2024-12-15 13:36:22.429479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.048 [2024-12-15 13:36:22.429998] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:17.048 [2024-12-15 13:36:22.430031] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:17.048 13:36:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.048 13:36:22 -- host/discovery.sh@117 -- # sleep 1 00:22:17.048 [2024-12-15 13:36:22.516107] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:17.048 [2024-12-15 13:36:22.575321] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:17.048 [2024-12-15 13:36:22.575345] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:17.048 [2024-12-15 13:36:22.575367] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:17.985 13:36:23 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:17.985 13:36:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:17.985 13:36:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:17.985 13:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.985 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.985 13:36:23 -- host/discovery.sh@59 -- # sort 00:22:17.985 13:36:23 -- host/discovery.sh@59 -- # xargs 00:22:17.985 13:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@119 -- # get_bdev_list 00:22:17.985 13:36:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:17.985 13:36:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:17.985 13:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.985 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.985 13:36:23 -- host/discovery.sh@55 -- # sort 00:22:17.985 13:36:23 -- host/discovery.sh@55 -- # xargs 00:22:17.985 13:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:17.985 13:36:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:17.985 13:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.985 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.985 13:36:23 -- host/discovery.sh@63 -- # sort -n 
00:22:17.985 13:36:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:17.985 13:36:23 -- host/discovery.sh@63 -- # xargs 00:22:17.985 13:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@121 -- # get_notification_count 00:22:17.985 13:36:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:17.985 13:36:23 -- host/discovery.sh@74 -- # jq '. | length' 00:22:17.985 13:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.985 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.985 13:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@74 -- # notification_count=0 00:22:17.985 13:36:23 -- host/discovery.sh@75 -- # notify_id=2 00:22:17.985 13:36:23 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:17.985 13:36:23 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.985 13:36:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.985 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.985 [2024-12-15 13:36:23.670248] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:17.985 [2024-12-15 13:36:23.670291] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:17.985 [2024-12-15 13:36:23.672772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.985 [2024-12-15 13:36:23.672819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.985 [2024-12-15 13:36:23.672845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.985 [2024-12-15 13:36:23.672854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.985 [2024-12-15 13:36:23.672862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.985 [2024-12-15 13:36:23.672870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.985 [2024-12-15 13:36:23.672879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:17.985 [2024-12-15 13:36:23.672887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.985 [2024-12-15 13:36:23.672895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.244 13:36:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.244 13:36:23 -- host/discovery.sh@127 -- # sleep 1 00:22:18.244 [2024-12-15 13:36:23.682740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.244 [2024-12-15 13:36:23.692766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.244 [2024-12-15 13:36:23.692886] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.692930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.692945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.244 [2024-12-15 13:36:23.692955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.244 [2024-12-15 13:36:23.692977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.244 [2024-12-15 13:36:23.692990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.244 [2024-12-15 13:36:23.693013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.244 [2024-12-15 13:36:23.693023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.244 [2024-12-15 13:36:23.693063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.244 [2024-12-15 13:36:23.702846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.244 [2024-12-15 13:36:23.702951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.702991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.703004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.244 [2024-12-15 13:36:23.703013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.244 [2024-12-15 13:36:23.703026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.244 [2024-12-15 13:36:23.703039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.244 [2024-12-15 13:36:23.703046] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.244 [2024-12-15 13:36:23.703054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.244 [2024-12-15 13:36:23.703067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.244 [2024-12-15 13:36:23.712909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.244 [2024-12-15 13:36:23.713032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.713072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.244 [2024-12-15 13:36:23.713086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.244 [2024-12-15 13:36:23.713095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.244 [2024-12-15 13:36:23.713108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.244 [2024-12-15 13:36:23.713120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.245 [2024-12-15 13:36:23.713128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.245 [2024-12-15 13:36:23.713135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.245 [2024-12-15 13:36:23.713148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.245 [2024-12-15 13:36:23.722989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.245 [2024-12-15 13:36:23.723107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.723146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.723160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.245 [2024-12-15 13:36:23.723169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.245 [2024-12-15 13:36:23.723183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.245 [2024-12-15 13:36:23.723195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.245 [2024-12-15 13:36:23.723203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.245 [2024-12-15 13:36:23.723210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.245 [2024-12-15 13:36:23.723222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.245 [2024-12-15 13:36:23.733076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.245 [2024-12-15 13:36:23.733178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.733216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.733229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.245 [2024-12-15 13:36:23.733238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.245 [2024-12-15 13:36:23.733252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.245 [2024-12-15 13:36:23.733263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.245 [2024-12-15 13:36:23.733271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.245 [2024-12-15 13:36:23.733279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.245 [2024-12-15 13:36:23.733291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.245 [2024-12-15 13:36:23.743152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.245 [2024-12-15 13:36:23.743254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.743295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.743308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.245 [2024-12-15 13:36:23.743317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.245 [2024-12-15 13:36:23.743330] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.245 [2024-12-15 13:36:23.743352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.245 [2024-12-15 13:36:23.743361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.245 [2024-12-15 13:36:23.743369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.245 [2024-12-15 13:36:23.743380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.245 [2024-12-15 13:36:23.753214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:18.245 [2024-12-15 13:36:23.753315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.753353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.245 [2024-12-15 13:36:23.753367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4dc570 with addr=10.0.0.2, port=4420 00:22:18.245 [2024-12-15 13:36:23.753376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4dc570 is same with the state(5) to be set 00:22:18.245 [2024-12-15 13:36:23.753389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4dc570 (9): Bad file descriptor 00:22:18.245 [2024-12-15 13:36:23.753410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:18.245 [2024-12-15 13:36:23.753419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:18.245 [2024-12-15 13:36:23.753427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:18.245 [2024-12-15 13:36:23.753439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.245 [2024-12-15 13:36:23.756416] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:18.245 [2024-12-15 13:36:23.756457] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:19.181 13:36:24 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:19.181 13:36:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:19.181 13:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.181 13:36:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:19.181 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:19.181 13:36:24 -- host/discovery.sh@59 -- # sort 00:22:19.181 13:36:24 -- host/discovery.sh@59 -- # xargs 00:22:19.181 13:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.181 13:36:24 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.181 13:36:24 -- host/discovery.sh@129 -- # get_bdev_list 00:22:19.181 13:36:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.181 13:36:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.181 13:36:24 -- host/discovery.sh@55 -- # xargs 00:22:19.181 13:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.181 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:19.181 13:36:24 -- host/discovery.sh@55 -- # sort 00:22:19.181 13:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.181 13:36:24 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:19.181 13:36:24 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:19.181 13:36:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:19.181 13:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.181 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:19.181 13:36:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:19.181 13:36:24 -- 
host/discovery.sh@63 -- # sort -n 00:22:19.181 13:36:24 -- host/discovery.sh@63 -- # xargs 00:22:19.181 13:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.181 13:36:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:19.182 13:36:24 -- host/discovery.sh@131 -- # get_notification_count 00:22:19.182 13:36:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:19.182 13:36:24 -- host/discovery.sh@74 -- # jq '. | length' 00:22:19.182 13:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.182 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 13:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.440 13:36:24 -- host/discovery.sh@74 -- # notification_count=0 00:22:19.440 13:36:24 -- host/discovery.sh@75 -- # notify_id=2 00:22:19.440 13:36:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:19.440 13:36:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:19.440 13:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.440 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:19.440 13:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.440 13:36:24 -- host/discovery.sh@135 -- # sleep 1 00:22:20.375 13:36:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:20.375 13:36:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.375 13:36:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.375 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:22:20.375 13:36:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.375 13:36:25 -- host/discovery.sh@59 -- # sort 00:22:20.375 13:36:25 -- host/discovery.sh@59 -- # xargs 00:22:20.375 13:36:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.375 13:36:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:20.375 13:36:25 -- host/discovery.sh@137 -- # get_bdev_list 00:22:20.376 13:36:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.376 13:36:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.376 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:22:20.376 13:36:25 -- host/discovery.sh@55 -- # sort 00:22:20.376 13:36:25 -- host/discovery.sh@55 -- # xargs 00:22:20.376 13:36:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.376 13:36:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.376 13:36:26 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:20.376 13:36:26 -- host/discovery.sh@138 -- # get_notification_count 00:22:20.376 13:36:26 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:20.376 13:36:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:20.376 13:36:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.376 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.376 13:36:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.634 13:36:26 -- host/discovery.sh@74 -- # notification_count=2 00:22:20.634 13:36:26 -- host/discovery.sh@75 -- # notify_id=4 00:22:20.634 13:36:26 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:20.635 13:36:26 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:20.635 13:36:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.635 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:22:21.571 [2024-12-15 13:36:27.093338] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:21.571 [2024-12-15 13:36:27.093359] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:21.571 [2024-12-15 13:36:27.093391] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:21.571 [2024-12-15 13:36:27.179426] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:21.571 [2024-12-15 13:36:27.238478] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:21.571 [2024-12-15 13:36:27.238528] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:21.571 13:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.571 13:36:27 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.571 13:36:27 -- common/autotest_common.sh@650 -- # local es=0 00:22:21.571 13:36:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.571 13:36:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.571 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.571 13:36:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.571 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.571 13:36:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.571 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.571 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.571 2024/12/15 13:36:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:21.571 request: 00:22:21.571 { 00:22:21.571 "method": "bdev_nvme_start_discovery", 00:22:21.571 "params": { 00:22:21.571 "name": "nvme", 00:22:21.830 "trtype": "tcp", 00:22:21.830 "traddr": "10.0.0.2", 00:22:21.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:21.830 
"adrfam": "ipv4", 00:22:21.830 "trsvcid": "8009", 00:22:21.830 "wait_for_attach": true 00:22:21.830 } 00:22:21.830 } 00:22:21.830 Got JSON-RPC error response 00:22:21.830 GoRPCClient: error on JSON-RPC call 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.830 13:36:27 -- common/autotest_common.sh@653 -- # es=1 00:22:21.830 13:36:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.830 13:36:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.830 13:36:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.830 13:36:27 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # sort 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # xargs 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:21.830 13:36:27 -- host/discovery.sh@147 -- # get_bdev_list 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # xargs 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # sort 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.830 13:36:27 -- common/autotest_common.sh@650 -- # local es=0 00:22:21.830 13:36:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.830 13:36:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.830 13:36:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.830 2024/12/15 13:36:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:21.830 request: 00:22:21.830 { 00:22:21.830 "method": "bdev_nvme_start_discovery", 00:22:21.830 "params": { 00:22:21.830 "name": "nvme_second", 00:22:21.830 "trtype": "tcp", 00:22:21.830 "traddr": "10.0.0.2", 
00:22:21.830 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:21.830 "adrfam": "ipv4", 00:22:21.830 "trsvcid": "8009", 00:22:21.830 "wait_for_attach": true 00:22:21.830 } 00:22:21.830 } 00:22:21.830 Got JSON-RPC error response 00:22:21.830 GoRPCClient: error on JSON-RPC call 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:21.830 13:36:27 -- common/autotest_common.sh@653 -- # es=1 00:22:21.830 13:36:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:21.830 13:36:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:21.830 13:36:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:21.830 13:36:27 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # sort 00:22:21.830 13:36:27 -- host/discovery.sh@67 -- # xargs 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:21.830 13:36:27 -- host/discovery.sh@153 -- # get_bdev_list 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # sort 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:21.830 13:36:27 -- host/discovery.sh@55 -- # xargs 00:22:21.830 13:36:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:21.830 13:36:27 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:21.830 13:36:27 -- common/autotest_common.sh@650 -- # local es=0 00:22:21.830 13:36:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:21.830 13:36:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:21.830 13:36:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:21.830 13:36:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:21.830 13:36:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.830 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:22:23.206 [2024-12-15 13:36:28.500622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.206 [2024-12-15 13:36:28.500729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.206 [2024-12-15 13:36:28.500747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x577f80 with addr=10.0.0.2, port=8010 00:22:23.206 [2024-12-15 13:36:28.500766] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:23.206 [2024-12-15 13:36:28.500774] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:23.206 [2024-12-15 13:36:28.500789] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:24.142 [2024-12-15 13:36:29.500578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.142 [2024-12-15 13:36:29.500679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.142 [2024-12-15 13:36:29.500696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x550ca0 with addr=10.0.0.2, port=8010 00:22:24.142 [2024-12-15 13:36:29.500708] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:24.142 [2024-12-15 13:36:29.500716] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:24.142 [2024-12-15 13:36:29.500723] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:25.079 [2024-12-15 13:36:30.500517] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:25.079 2024/12/15 13:36:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:25.079 request: 00:22:25.079 { 00:22:25.079 "method": "bdev_nvme_start_discovery", 00:22:25.079 "params": { 00:22:25.079 "name": "nvme_second", 00:22:25.079 "trtype": "tcp", 00:22:25.079 "traddr": "10.0.0.2", 00:22:25.079 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:25.079 "adrfam": "ipv4", 00:22:25.079 "trsvcid": "8010", 00:22:25.079 "attach_timeout_ms": 3000 00:22:25.079 } 00:22:25.079 } 00:22:25.079 Got JSON-RPC error response 00:22:25.079 GoRPCClient: error on JSON-RPC call 00:22:25.079 13:36:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:25.079 13:36:30 -- common/autotest_common.sh@653 -- # es=1 00:22:25.079 13:36:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:25.079 13:36:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:25.079 13:36:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:25.079 13:36:30 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:25.079 13:36:30 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:25.079 13:36:30 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:25.079 13:36:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.079 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:22:25.079 13:36:30 -- host/discovery.sh@67 -- # sort 00:22:25.079 13:36:30 -- host/discovery.sh@67 -- # xargs 00:22:25.079 13:36:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.079 13:36:30 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:25.079 13:36:30 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:25.079 13:36:30 -- host/discovery.sh@162 -- # kill 96304 00:22:25.079 13:36:30 -- host/discovery.sh@163 -- # nvmftestfini 00:22:25.079 13:36:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:25.079 13:36:30 -- nvmf/common.sh@116 -- # sync 00:22:25.079 13:36:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:25.079 13:36:30 -- nvmf/common.sh@119 -- # set +e 00:22:25.079 13:36:30 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:25.079 13:36:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:25.079 rmmod nvme_tcp 00:22:25.079 rmmod nvme_fabrics 00:22:25.079 rmmod nvme_keyring 00:22:25.079 13:36:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:25.079 13:36:30 -- nvmf/common.sh@123 -- # set -e 00:22:25.079 13:36:30 -- nvmf/common.sh@124 -- # return 0 00:22:25.079 13:36:30 -- nvmf/common.sh@477 -- # '[' -n 96248 ']' 00:22:25.079 13:36:30 -- nvmf/common.sh@478 -- # killprocess 96248 00:22:25.079 13:36:30 -- common/autotest_common.sh@936 -- # '[' -z 96248 ']' 00:22:25.079 13:36:30 -- common/autotest_common.sh@940 -- # kill -0 96248 00:22:25.079 13:36:30 -- common/autotest_common.sh@941 -- # uname 00:22:25.079 13:36:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.079 13:36:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96248 00:22:25.079 13:36:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:25.079 killing process with pid 96248 00:22:25.079 13:36:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:25.080 13:36:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96248' 00:22:25.080 13:36:30 -- common/autotest_common.sh@955 -- # kill 96248 00:22:25.080 13:36:30 -- common/autotest_common.sh@960 -- # wait 96248 00:22:25.339 13:36:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:25.339 13:36:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:25.339 13:36:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:25.339 13:36:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.339 13:36:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:25.339 13:36:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.339 13:36:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.339 13:36:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.339 13:36:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:25.339 00:22:25.339 real 0m14.150s 00:22:25.339 user 0m27.625s 00:22:25.339 sys 0m1.727s 00:22:25.340 13:36:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:25.340 ************************************ 00:22:25.340 END TEST nvmf_discovery 00:22:25.340 ************************************ 00:22:25.340 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:22:25.340 13:36:31 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:25.340 13:36:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:25.340 13:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:25.340 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:22:25.340 ************************************ 00:22:25.340 START TEST nvmf_discovery_remove_ifc 00:22:25.340 ************************************ 00:22:25.340 13:36:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:25.600 * Looking for test storage... 
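Before the next test begins: the two JSON-RPC failures in the nvmf_discovery run that just ended are the expected outcomes of its negative tests. Re-using the bdev name prefix nvme_second for a second discovery service fails with Code=-17 (File exists), and pointing discovery at port 8010, where nothing is listening, exhausts the attach timeout and fails with Code=-110 (Connection timed out). Stripped of the xtrace noise, the two calls are (rpc_cmd being the test suite's wrapper around SPDK's JSON-RPC client, aimed at the host-side socket):

  # duplicate discovery name -> Code=-17 Msg=File exists
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # nothing listening on 8010 -> Code=-110 Msg=Connection timed out after ~3 s
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
    -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

Each call is wrapped in the NOT helper, so the non-zero exit status (es=1) is what lets the test proceed.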
00:22:25.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:25.600 13:36:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:25.600 13:36:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:25.600 13:36:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:25.600 13:36:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:25.600 13:36:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:25.600 13:36:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:25.600 13:36:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:25.600 13:36:31 -- scripts/common.sh@335 -- # IFS=.-: 00:22:25.600 13:36:31 -- scripts/common.sh@335 -- # read -ra ver1 00:22:25.600 13:36:31 -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.600 13:36:31 -- scripts/common.sh@336 -- # read -ra ver2 00:22:25.600 13:36:31 -- scripts/common.sh@337 -- # local 'op=<' 00:22:25.600 13:36:31 -- scripts/common.sh@339 -- # ver1_l=2 00:22:25.600 13:36:31 -- scripts/common.sh@340 -- # ver2_l=1 00:22:25.600 13:36:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:25.600 13:36:31 -- scripts/common.sh@343 -- # case "$op" in 00:22:25.600 13:36:31 -- scripts/common.sh@344 -- # : 1 00:22:25.600 13:36:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:25.600 13:36:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.600 13:36:31 -- scripts/common.sh@364 -- # decimal 1 00:22:25.600 13:36:31 -- scripts/common.sh@352 -- # local d=1 00:22:25.600 13:36:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.600 13:36:31 -- scripts/common.sh@354 -- # echo 1 00:22:25.600 13:36:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:25.600 13:36:31 -- scripts/common.sh@365 -- # decimal 2 00:22:25.600 13:36:31 -- scripts/common.sh@352 -- # local d=2 00:22:25.600 13:36:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.600 13:36:31 -- scripts/common.sh@354 -- # echo 2 00:22:25.600 13:36:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:25.600 13:36:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:25.600 13:36:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:25.600 13:36:31 -- scripts/common.sh@367 -- # return 0 00:22:25.600 13:36:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.600 13:36:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.600 --rc genhtml_branch_coverage=1 00:22:25.600 --rc genhtml_function_coverage=1 00:22:25.600 --rc genhtml_legend=1 00:22:25.600 --rc geninfo_all_blocks=1 00:22:25.600 --rc geninfo_unexecuted_blocks=1 00:22:25.600 00:22:25.600 ' 00:22:25.600 13:36:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.600 --rc genhtml_branch_coverage=1 00:22:25.600 --rc genhtml_function_coverage=1 00:22:25.600 --rc genhtml_legend=1 00:22:25.600 --rc geninfo_all_blocks=1 00:22:25.600 --rc geninfo_unexecuted_blocks=1 00:22:25.600 00:22:25.600 ' 00:22:25.600 13:36:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.600 --rc genhtml_branch_coverage=1 00:22:25.600 --rc genhtml_function_coverage=1 00:22:25.600 --rc genhtml_legend=1 00:22:25.600 --rc geninfo_all_blocks=1 00:22:25.600 --rc geninfo_unexecuted_blocks=1 00:22:25.600 00:22:25.600 ' 00:22:25.600 
13:36:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:25.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.600 --rc genhtml_branch_coverage=1 00:22:25.600 --rc genhtml_function_coverage=1 00:22:25.600 --rc genhtml_legend=1 00:22:25.600 --rc geninfo_all_blocks=1 00:22:25.600 --rc geninfo_unexecuted_blocks=1 00:22:25.600 00:22:25.600 ' 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:25.600 13:36:31 -- nvmf/common.sh@7 -- # uname -s 00:22:25.600 13:36:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.600 13:36:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.600 13:36:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.600 13:36:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.600 13:36:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.600 13:36:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.600 13:36:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.600 13:36:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.600 13:36:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.600 13:36:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:25.600 13:36:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:25.600 13:36:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.600 13:36:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.600 13:36:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:25.600 13:36:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.600 13:36:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.600 13:36:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.600 13:36:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.600 13:36:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.600 13:36:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.600 13:36:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.600 13:36:31 -- paths/export.sh@5 -- # export PATH 00:22:25.600 13:36:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.600 13:36:31 -- nvmf/common.sh@46 -- # : 0 00:22:25.600 13:36:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:25.600 13:36:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:25.600 13:36:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:25.600 13:36:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.600 13:36:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.600 13:36:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:25.600 13:36:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:25.600 13:36:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:25.600 13:36:31 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:25.600 13:36:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:25.600 13:36:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.600 13:36:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:25.600 13:36:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:25.600 13:36:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:25.600 13:36:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.600 13:36:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.600 13:36:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.600 13:36:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:25.600 13:36:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:25.600 13:36:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.601 13:36:31 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.601 13:36:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:25.601 13:36:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:25.601 13:36:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:25.601 13:36:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:25.601 13:36:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:25.601 13:36:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.601 13:36:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:25.601 13:36:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:25.601 13:36:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:25.601 13:36:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:25.601 13:36:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:25.601 13:36:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:25.601 Cannot find device "nvmf_tgt_br" 00:22:25.601 13:36:31 -- nvmf/common.sh@154 -- # true 00:22:25.601 13:36:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.601 Cannot find device "nvmf_tgt_br2" 00:22:25.601 13:36:31 -- nvmf/common.sh@155 -- # true 00:22:25.601 13:36:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:25.601 13:36:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:25.601 Cannot find device "nvmf_tgt_br" 00:22:25.601 13:36:31 -- nvmf/common.sh@157 -- # true 00:22:25.601 13:36:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:25.860 Cannot find device "nvmf_tgt_br2" 00:22:25.860 13:36:31 -- nvmf/common.sh@158 -- # true 00:22:25.860 13:36:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:25.860 13:36:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:25.860 13:36:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.860 13:36:31 -- nvmf/common.sh@161 -- # true 00:22:25.860 13:36:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.860 13:36:31 -- nvmf/common.sh@162 -- # true 00:22:25.860 13:36:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:25.860 13:36:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:25.860 13:36:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:25.860 13:36:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:25.860 13:36:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:25.860 13:36:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:25.860 13:36:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:25.860 13:36:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:25.860 13:36:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:25.860 13:36:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:25.860 13:36:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:25.860 13:36:31 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:25.860 13:36:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:25.860 13:36:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:25.860 13:36:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:25.860 13:36:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:25.860 13:36:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:25.860 13:36:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:25.860 13:36:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:25.860 13:36:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:25.860 13:36:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:25.860 13:36:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:25.860 13:36:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:25.860 13:36:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:25.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:25.860 00:22:25.860 --- 10.0.0.2 ping statistics --- 00:22:25.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.860 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:25.860 13:36:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:25.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:25.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:25.860 00:22:25.860 --- 10.0.0.3 ping statistics --- 00:22:25.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.860 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:25.860 13:36:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:25.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:25.860 00:22:25.860 --- 10.0.0.1 ping statistics --- 00:22:25.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.860 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:25.860 13:36:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.860 13:36:31 -- nvmf/common.sh@421 -- # return 0 00:22:25.860 13:36:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:25.860 13:36:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.860 13:36:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:25.860 13:36:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:25.860 13:36:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.860 13:36:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:25.860 13:36:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:25.860 13:36:31 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:25.860 13:36:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:25.860 13:36:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:25.860 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:22:25.860 13:36:31 -- nvmf/common.sh@469 -- # nvmfpid=96814 00:22:25.860 13:36:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:25.860 13:36:31 -- nvmf/common.sh@470 -- # waitforlisten 96814 00:22:25.860 13:36:31 -- common/autotest_common.sh@829 -- # '[' -z 96814 ']' 00:22:25.860 13:36:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.860 13:36:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.860 13:36:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.861 13:36:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.861 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:22:26.120 [2024-12-15 13:36:31.590150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:26.120 [2024-12-15 13:36:31.590242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.120 [2024-12-15 13:36:31.730352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.120 [2024-12-15 13:36:31.795319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:26.120 [2024-12-15 13:36:31.795452] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.120 [2024-12-15 13:36:31.795464] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.120 [2024-12-15 13:36:31.795471] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
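The three ping checks above confirm the virtual topology that nvmf_veth_init built before the target was launched: three veth pairs joined by a bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. Condensed from the trace (all commands appear verbatim above; the `ip link set ... up` calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,   10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,   10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target process is then started under `ip netns exec nvmf_tgt_ns_spdk`, which is why it can listen on 10.0.0.2 while the initiator-side SPDK app reaches it from 10.0.0.1 across the bridge.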
00:22:26.120 [2024-12-15 13:36:31.795493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.057 13:36:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.057 13:36:32 -- common/autotest_common.sh@862 -- # return 0 00:22:27.057 13:36:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:27.057 13:36:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:27.057 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:22:27.057 13:36:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.057 13:36:32 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:27.057 13:36:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.057 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:22:27.057 [2024-12-15 13:36:32.651015] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.057 [2024-12-15 13:36:32.659100] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:27.057 null0 00:22:27.057 [2024-12-15 13:36:32.691031] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.057 13:36:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.057 13:36:32 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96864 00:22:27.057 13:36:32 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:27.057 13:36:32 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96864 /tmp/host.sock 00:22:27.057 13:36:32 -- common/autotest_common.sh@829 -- # '[' -z 96864 ']' 00:22:27.057 13:36:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:27.057 13:36:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.057 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:27.057 13:36:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:27.057 13:36:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.057 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:22:27.316 [2024-12-15 13:36:32.766936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
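Two SPDK processes are now running for discovery_remove_ifc. The target (pid 96814) lives inside the network namespace, answers RPC on the default /var/tmp/spdk.sock, and has just been given a TCP transport, a null0 bdev, a discovery listener on 10.0.0.2:8009 and a data listener on port 4420. The "host" (pid 96864) is a second nvmf_tgt binary used purely as an initiator; it is parked on /tmp/host.sock with --wait-for-rpc so the script can configure bdev_nvme (the @65/@66 calls that follow) before the framework starts. When poking at a run by hand, the two sides can be queried separately (rpc.py path assumed; method names from SPDK's standard RPC set):

  # target side: subsystems, namespaces and listeners (default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_get_subsystems
  # host side: what the initiator currently sees
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs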
00:22:27.316 [2024-12-15 13:36:32.767031] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96864 ] 00:22:27.316 [2024-12-15 13:36:32.906773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.316 [2024-12-15 13:36:32.976463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:27.316 [2024-12-15 13:36:32.976668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.265 13:36:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.265 13:36:33 -- common/autotest_common.sh@862 -- # return 0 00:22:28.265 13:36:33 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.265 13:36:33 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:28.265 13:36:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.265 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:22:28.265 13:36:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.265 13:36:33 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:28.265 13:36:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.265 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:22:28.265 13:36:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.265 13:36:33 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:28.265 13:36:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.265 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:22:29.213 [2024-12-15 13:36:34.874053] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.213 [2024-12-15 13:36:34.874101] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.213 [2024-12-15 13:36:34.874119] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.472 [2024-12-15 13:36:34.960153] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:29.472 [2024-12-15 13:36:35.015961] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:29.472 [2024-12-15 13:36:35.016026] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:29.472 [2024-12-15 13:36:35.016053] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:29.472 [2024-12-15 13:36:35.016068] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:29.472 [2024-12-15 13:36:35.016093] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:29.472 13:36:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.472 [2024-12-15 
13:36:35.022634] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe7bda0 was disconnected and freed. delete nvme_qpair. 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.472 13:36:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.472 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.472 13:36:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.472 13:36:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.472 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.472 13:36:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.472 13:36:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.849 13:36:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.849 13:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.849 13:36:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.849 13:36:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.784 13:36:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.784 13:36:37 -- common/autotest_common.sh@10 -- # set +x 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.784 13:36:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.784 13:36:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
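The repeated @29/@33/@34 lines from here on are a polling loop: the script has just deleted 10.0.0.2 from nvmf_tgt_if and taken the interface down (@75/@76), and now waits for the host's bdev list to drain. Reconstructed roughly from the trace, the helpers behave like this (a sketch; the real functions in discovery_remove_ifc.sh may differ in details such as iteration limits):

  get_bdev_list() {
    # one sorted line with the names of every bdev the host app exposes
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
    # poll once a second until the list matches the expected value:
    # "nvme0n1" while the path is up, "" once the controller is gone
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
      sleep 1
    done
  }

Each iteration below is one pass through that loop while the controller is still timing out.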
00:22:32.722 13:36:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.722 13:36:38 -- common/autotest_common.sh@10 -- # set +x 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.722 13:36:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.722 13:36:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.658 13:36:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.658 13:36:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.658 13:36:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.658 13:36:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.658 13:36:39 -- common/autotest_common.sh@10 -- # set +x 00:22:33.658 13:36:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.658 13:36:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.917 13:36:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.917 13:36:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.917 13:36:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.852 13:36:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.852 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.852 13:36:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.852 [2024-12-15 13:36:40.443893] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:34.852 [2024-12-15 13:36:40.443988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.852 [2024-12-15 13:36:40.444018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.852 [2024-12-15 13:36:40.444030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.852 [2024-12-15 13:36:40.444039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.852 [2024-12-15 13:36:40.444049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.852 [2024-12-15 13:36:40.444058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.852 [2024-12-15 13:36:40.444067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.852 [2024-12-15 13:36:40.444076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.852 [2024-12-15 
13:36:40.444085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.852 [2024-12-15 13:36:40.444094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.852 [2024-12-15 13:36:40.444102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5690 is same with the state(5) to be set 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.852 13:36:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.852 [2024-12-15 13:36:40.453890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5690 (9): Bad file descriptor 00:22:34.852 [2024-12-15 13:36:40.463916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:35.788 13:36:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.788 13:36:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.788 13:36:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.788 13:36:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.788 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:22:35.788 13:36:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.788 13:36:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.046 [2024-12-15 13:36:41.479694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:36.982 [2024-12-15 13:36:42.503705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:36.982 [2024-12-15 13:36:42.503824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5690 with addr=10.0.0.2, port=4420 00:22:36.982 [2024-12-15 13:36:42.503854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5690 is same with the state(5) to be set 00:22:36.982 [2024-12-15 13:36:42.503902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:36.982 [2024-12-15 13:36:42.503921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:36.982 [2024-12-15 13:36:42.503935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:36.982 [2024-12-15 13:36:42.503951] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:36.982 [2024-12-15 13:36:42.504683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5690 (9): Bad file descriptor 00:22:36.982 [2024-12-15 13:36:42.504740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
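This failure cascade is governed by the options passed to bdev_nvme_start_discovery earlier (--reconnect-delay-sec 1 --ctrlr-loss-timeout-sec 2 --fast-io-fail-timeout-sec 1): roughly, reconnects are attempted every second, I/O is failed fast after one second, and after two seconds of controller loss the reset is abandoned, so the nvme0n1 bdev is deleted and the wait_for_bdev '' loop can complete. The same transition can be watched by hand from the host socket (rpc.py path assumed):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'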
00:22:36.982 [2024-12-15 13:36:42.504785] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:36.982 [2024-12-15 13:36:42.504845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.982 [2024-12-15 13:36:42.504870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.982 [2024-12-15 13:36:42.504895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.982 [2024-12-15 13:36:42.504913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.982 [2024-12-15 13:36:42.504932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.982 [2024-12-15 13:36:42.504949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.982 [2024-12-15 13:36:42.504968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.982 [2024-12-15 13:36:42.504985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.982 [2024-12-15 13:36:42.505004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.982 [2024-12-15 13:36:42.505021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.982 [2024-12-15 13:36:42.505039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:36.982 [2024-12-15 13:36:42.505104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe43410 (9): Bad file descriptor 00:22:36.982 [2024-12-15 13:36:42.506094] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:36.982 [2024-12-15 13:36:42.506122] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:36.982 13:36:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.982 13:36:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.982 13:36:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.918 13:36:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.918 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:22:37.918 13:36:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.918 13:36:43 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.176 13:36:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.176 13:36:43 -- common/autotest_common.sh@10 -- # set +x 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.176 13:36:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:38.176 13:36:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.119 [2024-12-15 13:36:44.515562] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.119 [2024-12-15 13:36:44.515604] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.119 [2024-12-15 13:36:44.515637] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.119 [2024-12-15 13:36:44.602697] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:39.119 [2024-12-15 13:36:44.657744] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:39.119 [2024-12-15 13:36:44.657786] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:39.119 [2024-12-15 13:36:44.657808] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:39.119 [2024-12-15 13:36:44.657823] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:39.119 [2024-12-15 13:36:44.657831] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:39.119 [2024-12-15 13:36:44.664111] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe490c0 was disconnected and freed. delete nvme_qpair. 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.120 13:36:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.120 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.120 13:36:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:39.120 13:36:44 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96864 00:22:39.120 13:36:44 -- common/autotest_common.sh@936 -- # '[' -z 96864 ']' 00:22:39.120 13:36:44 -- common/autotest_common.sh@940 -- # kill -0 96864 00:22:39.120 13:36:44 -- common/autotest_common.sh@941 -- # uname 00:22:39.120 13:36:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.120 13:36:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96864 00:22:39.120 killing process with pid 96864 00:22:39.120 13:36:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:39.120 13:36:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:39.120 13:36:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96864' 00:22:39.120 13:36:44 -- common/autotest_common.sh@955 -- # kill 96864 00:22:39.120 13:36:44 -- common/autotest_common.sh@960 -- # wait 96864 00:22:39.379 13:36:44 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:39.379 13:36:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:39.379 13:36:44 -- nvmf/common.sh@116 -- # sync 00:22:39.379 13:36:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:39.379 13:36:45 -- nvmf/common.sh@119 -- # set +e 00:22:39.379 13:36:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:39.379 13:36:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:39.379 rmmod nvme_tcp 00:22:39.379 rmmod nvme_fabrics 00:22:39.379 rmmod nvme_keyring 00:22:39.636 13:36:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:39.637 13:36:45 -- nvmf/common.sh@123 -- # set -e 00:22:39.637 13:36:45 -- nvmf/common.sh@124 -- # return 0 00:22:39.637 13:36:45 -- nvmf/common.sh@477 -- # '[' -n 96814 ']' 00:22:39.637 13:36:45 -- nvmf/common.sh@478 -- # killprocess 96814 00:22:39.637 13:36:45 -- common/autotest_common.sh@936 -- # '[' -z 96814 ']' 00:22:39.637 13:36:45 -- common/autotest_common.sh@940 -- # kill -0 96814 00:22:39.637 13:36:45 -- common/autotest_common.sh@941 -- # uname 00:22:39.637 13:36:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:39.637 13:36:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96814 00:22:39.637 killing process with pid 96814 00:22:39.637 13:36:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:39.637 13:36:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
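The second half of the test is the mirror image of the failure injection: the address was put back and the interface brought up (@82/@83), the host's discovery poller reconnected to 10.0.0.2:8009 and attached the subsystem again, this time as nvme1, and wait_for_bdev nvme1n1 succeeded. The restore step is simply the inverse of the removal:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

With the bdev back, the trap is cleared, the host app (96864) is killed, and nvmftestfini unloads nvme-tcp/nvme-fabrics and shuts the target (96814) down, as the remaining lines of this test show.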
00:22:39.637 13:36:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96814' 00:22:39.637 13:36:45 -- common/autotest_common.sh@955 -- # kill 96814 00:22:39.637 13:36:45 -- common/autotest_common.sh@960 -- # wait 96814 00:22:39.637 13:36:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:39.637 13:36:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:39.637 13:36:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:39.637 13:36:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.637 13:36:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:39.637 13:36:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.637 13:36:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.637 13:36:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.896 13:36:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:39.896 ************************************ 00:22:39.896 END TEST nvmf_discovery_remove_ifc 00:22:39.896 ************************************ 00:22:39.896 00:22:39.896 real 0m14.333s 00:22:39.896 user 0m24.733s 00:22:39.896 sys 0m1.555s 00:22:39.896 13:36:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:39.896 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.896 13:36:45 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:39.896 13:36:45 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:39.896 13:36:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:39.896 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:39.896 ************************************ 00:22:39.896 START TEST nvmf_digest 00:22:39.896 ************************************ 00:22:39.896 13:36:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:39.896 * Looking for test storage... 00:22:39.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:39.896 13:36:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:39.896 13:36:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:39.896 13:36:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:39.896 13:36:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:39.896 13:36:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:39.896 13:36:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:39.896 13:36:45 -- scripts/common.sh@335 -- # IFS=.-: 00:22:39.896 13:36:45 -- scripts/common.sh@335 -- # read -ra ver1 00:22:39.896 13:36:45 -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.896 13:36:45 -- scripts/common.sh@336 -- # read -ra ver2 00:22:39.896 13:36:45 -- scripts/common.sh@337 -- # local 'op=<' 00:22:39.896 13:36:45 -- scripts/common.sh@339 -- # ver1_l=2 00:22:39.896 13:36:45 -- scripts/common.sh@340 -- # ver2_l=1 00:22:39.896 13:36:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:39.896 13:36:45 -- scripts/common.sh@343 -- # case "$op" in 00:22:39.896 13:36:45 -- scripts/common.sh@344 -- # : 1 00:22:39.896 13:36:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:39.896 13:36:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.896 13:36:45 -- scripts/common.sh@364 -- # decimal 1 00:22:39.896 13:36:45 -- scripts/common.sh@352 -- # local d=1 00:22:39.896 13:36:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.896 13:36:45 -- scripts/common.sh@354 -- # echo 1 00:22:39.896 13:36:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:39.896 13:36:45 -- scripts/common.sh@365 -- # decimal 2 00:22:39.896 13:36:45 -- scripts/common.sh@352 -- # local d=2 00:22:39.896 13:36:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.896 13:36:45 -- scripts/common.sh@354 -- # echo 2 00:22:39.896 13:36:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:39.896 13:36:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:39.896 13:36:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:39.896 13:36:45 -- scripts/common.sh@367 -- # return 0 00:22:39.896 13:36:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:39.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.896 --rc genhtml_branch_coverage=1 00:22:39.896 --rc genhtml_function_coverage=1 00:22:39.896 --rc genhtml_legend=1 00:22:39.896 --rc geninfo_all_blocks=1 00:22:39.896 --rc geninfo_unexecuted_blocks=1 00:22:39.896 00:22:39.896 ' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:39.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.896 --rc genhtml_branch_coverage=1 00:22:39.896 --rc genhtml_function_coverage=1 00:22:39.896 --rc genhtml_legend=1 00:22:39.896 --rc geninfo_all_blocks=1 00:22:39.896 --rc geninfo_unexecuted_blocks=1 00:22:39.896 00:22:39.896 ' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:39.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.896 --rc genhtml_branch_coverage=1 00:22:39.896 --rc genhtml_function_coverage=1 00:22:39.896 --rc genhtml_legend=1 00:22:39.896 --rc geninfo_all_blocks=1 00:22:39.896 --rc geninfo_unexecuted_blocks=1 00:22:39.896 00:22:39.896 ' 00:22:39.896 13:36:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:39.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.896 --rc genhtml_branch_coverage=1 00:22:39.896 --rc genhtml_function_coverage=1 00:22:39.896 --rc genhtml_legend=1 00:22:39.896 --rc geninfo_all_blocks=1 00:22:39.896 --rc geninfo_unexecuted_blocks=1 00:22:39.896 00:22:39.896 ' 00:22:39.896 13:36:45 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:39.896 13:36:45 -- nvmf/common.sh@7 -- # uname -s 00:22:40.156 13:36:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.156 13:36:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.156 13:36:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.156 13:36:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.156 13:36:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.156 13:36:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.156 13:36:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.156 13:36:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.156 13:36:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.156 13:36:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:40.156 
13:36:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:22:40.156 13:36:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.156 13:36:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.156 13:36:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:40.156 13:36:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:40.156 13:36:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.156 13:36:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.156 13:36:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.156 13:36:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.156 13:36:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.156 13:36:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.156 13:36:45 -- paths/export.sh@5 -- # export PATH 00:22:40.156 13:36:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.156 13:36:45 -- nvmf/common.sh@46 -- # : 0 00:22:40.156 13:36:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:40.156 13:36:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:40.156 13:36:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:40.156 13:36:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.156 13:36:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.156 13:36:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:40.156 13:36:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:40.156 13:36:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:40.156 13:36:45 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:40.156 13:36:45 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:40.156 13:36:45 -- host/digest.sh@16 -- # runtime=2 00:22:40.156 13:36:45 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:40.156 13:36:45 -- host/digest.sh@132 -- # nvmftestinit 00:22:40.156 13:36:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:40.156 13:36:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.156 13:36:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:40.156 13:36:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:40.156 13:36:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:40.156 13:36:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.156 13:36:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.156 13:36:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.156 13:36:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:40.156 13:36:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:40.156 13:36:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.156 13:36:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.156 13:36:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:40.156 13:36:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:40.156 13:36:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:40.156 13:36:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:40.156 13:36:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:40.156 13:36:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.156 13:36:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:40.156 13:36:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:40.156 13:36:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:40.156 13:36:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:40.156 13:36:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:40.156 13:36:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:40.156 Cannot find device "nvmf_tgt_br" 00:22:40.156 13:36:45 -- nvmf/common.sh@154 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:40.156 Cannot find device "nvmf_tgt_br2" 00:22:40.156 13:36:45 -- nvmf/common.sh@155 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:40.156 13:36:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:40.156 Cannot find device "nvmf_tgt_br" 00:22:40.156 13:36:45 -- nvmf/common.sh@157 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:40.156 Cannot find device "nvmf_tgt_br2" 00:22:40.156 13:36:45 -- nvmf/common.sh@158 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:40.156 13:36:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:40.156 
13:36:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:40.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.156 13:36:45 -- nvmf/common.sh@161 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:40.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:40.156 13:36:45 -- nvmf/common.sh@162 -- # true 00:22:40.156 13:36:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:40.156 13:36:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:40.156 13:36:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:40.156 13:36:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:40.156 13:36:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:40.156 13:36:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:40.156 13:36:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:40.156 13:36:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:40.156 13:36:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:40.156 13:36:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:40.156 13:36:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:40.156 13:36:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:40.156 13:36:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:40.156 13:36:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:40.416 13:36:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:40.416 13:36:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:40.416 13:36:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:40.416 13:36:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:40.416 13:36:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:40.416 13:36:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:40.416 13:36:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:40.416 13:36:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:40.416 13:36:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:40.416 13:36:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:40.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:40.416 00:22:40.416 --- 10.0.0.2 ping statistics --- 00:22:40.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.416 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:40.416 13:36:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:40.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:40.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:22:40.416 00:22:40.416 --- 10.0.0.3 ping statistics --- 00:22:40.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.416 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:40.416 13:36:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:40.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:22:40.416 00:22:40.416 --- 10.0.0.1 ping statistics --- 00:22:40.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.416 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:40.416 13:36:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.416 13:36:45 -- nvmf/common.sh@421 -- # return 0 00:22:40.416 13:36:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:40.416 13:36:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.416 13:36:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:40.416 13:36:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:40.416 13:36:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.416 13:36:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:40.416 13:36:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:40.416 13:36:45 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:40.416 13:36:45 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:40.416 13:36:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:40.416 13:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:40.416 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:40.416 ************************************ 00:22:40.416 START TEST nvmf_digest_clean 00:22:40.416 ************************************ 00:22:40.416 13:36:45 -- common/autotest_common.sh@1114 -- # run_digest 00:22:40.416 13:36:45 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:40.416 13:36:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:40.416 13:36:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.416 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:40.416 13:36:45 -- nvmf/common.sh@469 -- # nvmfpid=97284 00:22:40.416 13:36:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:40.416 13:36:45 -- nvmf/common.sh@470 -- # waitforlisten 97284 00:22:40.416 13:36:45 -- common/autotest_common.sh@829 -- # '[' -z 97284 ']' 00:22:40.416 13:36:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.416 13:36:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.416 13:36:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.416 13:36:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.416 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:40.416 [2024-12-15 13:36:46.016073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
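For orientation, the nvmf_veth_init plumbing traced above reduces to the following standalone sequence. Interface names, addresses and the 4420 port are copied from the trace; the script's cleanup and error handling are omitted, so treat this as a sketch rather than a drop-in replacement:

# The target side lives in its own network namespace; the initiator stays on the host.
ip netns add nvmf_tgt_ns_spdk
# One veth pair for the initiator, two for the target (10.0.0.2 and 10.0.0.3).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP traffic in and verify reachability in both directions, as the pings above do.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1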
00:22:40.416 [2024-12-15 13:36:46.016157] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.675 [2024-12-15 13:36:46.147754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.675 [2024-12-15 13:36:46.211090] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:40.675 [2024-12-15 13:36:46.211237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.675 [2024-12-15 13:36:46.211250] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.675 [2024-12-15 13:36:46.211258] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.675 [2024-12-15 13:36:46.211281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.612 13:36:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.612 13:36:46 -- common/autotest_common.sh@862 -- # return 0 00:22:41.612 13:36:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:41.612 13:36:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.612 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:22:41.612 13:36:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.612 13:36:47 -- host/digest.sh@120 -- # common_target_config 00:22:41.612 13:36:47 -- host/digest.sh@43 -- # rpc_cmd 00:22:41.612 13:36:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.612 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:22:41.612 null0 00:22:41.612 [2024-12-15 13:36:47.127302] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.612 [2024-12-15 13:36:47.151388] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.612 13:36:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.612 13:36:47 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:41.612 13:36:47 -- host/digest.sh@77 -- # local rw bs qd 00:22:41.612 13:36:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:41.612 13:36:47 -- host/digest.sh@80 -- # rw=randread 00:22:41.612 13:36:47 -- host/digest.sh@80 -- # bs=4096 00:22:41.612 13:36:47 -- host/digest.sh@80 -- # qd=128 00:22:41.612 13:36:47 -- host/digest.sh@82 -- # bperfpid=97340 00:22:41.612 13:36:47 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:41.612 13:36:47 -- host/digest.sh@83 -- # waitforlisten 97340 /var/tmp/bperf.sock 00:22:41.612 13:36:47 -- common/autotest_common.sh@829 -- # '[' -z 97340 ']' 00:22:41.612 13:36:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:41.612 13:36:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:41.612 13:36:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
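Here both the nvmf target (on /var/tmp/spdk.sock) and the bdevperf instance (on /var/tmp/bperf.sock) are started with --wait-for-rpc and then polled with waitforlisten before any configuration is sent. A minimal stand-in for that launch-and-wait pattern, using the target as the example; the real waitforlisten does more bookkeeping, and rpc_get_methods is used here only as a convenient liveness probe:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target is ready to be configured.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done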
00:22:41.612 13:36:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.612 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:22:41.612 [2024-12-15 13:36:47.213217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:41.612 [2024-12-15 13:36:47.213315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97340 ] 00:22:41.871 [2024-12-15 13:36:47.351863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.871 [2024-12-15 13:36:47.418763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.871 13:36:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.871 13:36:47 -- common/autotest_common.sh@862 -- # return 0 00:22:41.871 13:36:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:41.871 13:36:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:41.871 13:36:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:42.130 13:36:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.130 13:36:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:42.697 nvme0n1 00:22:42.697 13:36:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:42.697 13:36:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:42.697 Running I/O for 2 seconds... 
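The bperf side of each run follows the pattern just traced: start bdevperf paused, initialize it over its own RPC socket, attach an NVMe/TCP controller with the data digest enabled, then drive the workload. Reconstructed from the trace (the rpc_bperf helper name is ours; --ddgst is what forces a crc32c data digest onto every I/O, which the later accel-stats check relies on):

# bdevperf pinned to core 1 (-m 2), idle until configured (--wait-for-rpc),
# 2-second randread of 4 KiB blocks at queue depth 128.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

rpc_bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
rpc_bperf framework_start_init
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests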
00:22:44.601 00:22:44.601 Latency(us) 00:22:44.601 [2024-12-15T13:36:50.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.601 [2024-12-15T13:36:50.291Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:44.601 nvme0n1 : 2.00 23481.66 91.73 0.00 0.00 5446.00 2368.23 18350.08 00:22:44.601 [2024-12-15T13:36:50.291Z] =================================================================================================================== 00:22:44.601 [2024-12-15T13:36:50.291Z] Total : 23481.66 91.73 0.00 0.00 5446.00 2368.23 18350.08 00:22:44.601 0 00:22:44.601 13:36:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:44.601 13:36:50 -- host/digest.sh@92 -- # get_accel_stats 00:22:44.601 13:36:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:44.601 13:36:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:44.601 13:36:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:44.601 | select(.opcode=="crc32c") 00:22:44.601 | "\(.module_name) \(.executed)"' 00:22:44.860 13:36:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:44.860 13:36:50 -- host/digest.sh@93 -- # exp_module=software 00:22:44.860 13:36:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:44.860 13:36:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:44.860 13:36:50 -- host/digest.sh@97 -- # killprocess 97340 00:22:44.860 13:36:50 -- common/autotest_common.sh@936 -- # '[' -z 97340 ']' 00:22:44.860 13:36:50 -- common/autotest_common.sh@940 -- # kill -0 97340 00:22:44.860 13:36:50 -- common/autotest_common.sh@941 -- # uname 00:22:44.860 13:36:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:44.860 13:36:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97340 00:22:44.860 13:36:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:44.860 killing process with pid 97340 00:22:44.860 13:36:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:44.860 13:36:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97340' 00:22:44.860 Received shutdown signal, test time was about 2.000000 seconds 00:22:44.860 00:22:44.860 Latency(us) 00:22:44.860 [2024-12-15T13:36:50.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.861 [2024-12-15T13:36:50.551Z] =================================================================================================================== 00:22:44.861 [2024-12-15T13:36:50.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.861 13:36:50 -- common/autotest_common.sh@955 -- # kill 97340 00:22:44.861 13:36:50 -- common/autotest_common.sh@960 -- # wait 97340 00:22:45.119 13:36:50 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:45.119 13:36:50 -- host/digest.sh@77 -- # local rw bs qd 00:22:45.120 13:36:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:45.120 13:36:50 -- host/digest.sh@80 -- # rw=randread 00:22:45.120 13:36:50 -- host/digest.sh@80 -- # bs=131072 00:22:45.120 13:36:50 -- host/digest.sh@80 -- # qd=16 00:22:45.120 13:36:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:45.120 13:36:50 -- host/digest.sh@82 -- # bperfpid=97411 00:22:45.120 13:36:50 -- host/digest.sh@83 -- # waitforlisten 97411 /var/tmp/bperf.sock 00:22:45.120 13:36:50 -- 
common/autotest_common.sh@829 -- # '[' -z 97411 ']' 00:22:45.120 13:36:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:45.120 13:36:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:45.120 13:36:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:45.120 13:36:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.120 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:22:45.120 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:45.120 Zero copy mechanism will not be used. 00:22:45.120 [2024-12-15 13:36:50.771406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:45.120 [2024-12-15 13:36:50.771494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97411 ] 00:22:45.379 [2024-12-15 13:36:50.903542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.379 [2024-12-15 13:36:50.963278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.379 13:36:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.379 13:36:51 -- common/autotest_common.sh@862 -- # return 0 00:22:45.379 13:36:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:45.379 13:36:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:45.379 13:36:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:45.947 13:36:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.947 13:36:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:46.206 nvme0n1 00:22:46.206 13:36:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:46.206 13:36:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:46.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:46.206 Zero copy mechanism will not be used. 00:22:46.206 Running I/O for 2 seconds... 
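The pass/fail criterion for these clean-digest runs is the accel-stats check traced after the first run: pull the crc32c counters out of the bperf instance and require that digests were executed, and by the expected module. Roughly:

read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
exp_module=software                       # no hardware accel engine in this VM run
(( acc_executed > 0 ))                    # crc32c was actually computed for the I/O
[[ $acc_module == "$exp_module" ]]        # ...and by the software accel module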
00:22:48.739 00:22:48.739 Latency(us) 00:22:48.739 [2024-12-15T13:36:54.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.739 [2024-12-15T13:36:54.429Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:48.739 nvme0n1 : 2.04 10212.16 1276.52 0.00 0.00 1535.19 621.85 41704.73 00:22:48.739 [2024-12-15T13:36:54.429Z] =================================================================================================================== 00:22:48.739 [2024-12-15T13:36:54.429Z] Total : 10212.16 1276.52 0.00 0.00 1535.19 621.85 41704.73 00:22:48.739 0 00:22:48.739 13:36:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:48.739 13:36:53 -- host/digest.sh@92 -- # get_accel_stats 00:22:48.739 13:36:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:48.739 13:36:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:48.739 | select(.opcode=="crc32c") 00:22:48.739 | "\(.module_name) \(.executed)"' 00:22:48.739 13:36:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:48.739 13:36:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:48.739 13:36:54 -- host/digest.sh@93 -- # exp_module=software 00:22:48.739 13:36:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:48.739 13:36:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:48.739 13:36:54 -- host/digest.sh@97 -- # killprocess 97411 00:22:48.739 13:36:54 -- common/autotest_common.sh@936 -- # '[' -z 97411 ']' 00:22:48.739 13:36:54 -- common/autotest_common.sh@940 -- # kill -0 97411 00:22:48.739 13:36:54 -- common/autotest_common.sh@941 -- # uname 00:22:48.740 13:36:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.740 13:36:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97411 00:22:48.740 killing process with pid 97411 00:22:48.740 Received shutdown signal, test time was about 2.000000 seconds 00:22:48.740 00:22:48.740 Latency(us) 00:22:48.740 [2024-12-15T13:36:54.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.740 [2024-12-15T13:36:54.430Z] =================================================================================================================== 00:22:48.740 [2024-12-15T13:36:54.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.740 13:36:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.740 13:36:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.740 13:36:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97411' 00:22:48.740 13:36:54 -- common/autotest_common.sh@955 -- # kill 97411 00:22:48.740 13:36:54 -- common/autotest_common.sh@960 -- # wait 97411 00:22:48.740 13:36:54 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:48.740 13:36:54 -- host/digest.sh@77 -- # local rw bs qd 00:22:48.740 13:36:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:48.740 13:36:54 -- host/digest.sh@80 -- # rw=randwrite 00:22:48.740 13:36:54 -- host/digest.sh@80 -- # bs=4096 00:22:48.740 13:36:54 -- host/digest.sh@80 -- # qd=128 00:22:48.740 13:36:54 -- host/digest.sh@82 -- # bperfpid=97488 00:22:48.740 13:36:54 -- host/digest.sh@83 -- # waitforlisten 97488 /var/tmp/bperf.sock 00:22:48.740 13:36:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:48.740 13:36:54 -- 
common/autotest_common.sh@829 -- # '[' -z 97488 ']' 00:22:48.740 13:36:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:48.740 13:36:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.740 13:36:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:48.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:48.740 13:36:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.740 13:36:54 -- common/autotest_common.sh@10 -- # set +x 00:22:48.740 [2024-12-15 13:36:54.428046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:48.740 [2024-12-15 13:36:54.428147] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97488 ] 00:22:48.998 [2024-12-15 13:36:54.556901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.998 [2024-12-15 13:36:54.623323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.935 13:36:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.935 13:36:55 -- common/autotest_common.sh@862 -- # return 0 00:22:49.935 13:36:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:49.935 13:36:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:49.935 13:36:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:50.194 13:36:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.194 13:36:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.452 nvme0n1 00:22:50.452 13:36:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:50.452 13:36:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.711 Running I/O for 2 seconds... 
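The four nvmf_digest_clean runs in this block are the same sequence with different run_bperf arguments, which map directly onto bdevperf's -w/-o/-q flags. An illustrative loop; digest.sh spells the four calls out individually, and the variable handling here is ours:

for args in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
    set -- $args                    # $1=workload  $2=I/O size  $3=queue depth
    run_bperf "$1" "$2" "$3"        # launches bdevperf with -w "$1" -o "$2" -q "$3" as traced above
done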
00:22:52.649 00:22:52.649 Latency(us) 00:22:52.649 [2024-12-15T13:36:58.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.649 [2024-12-15T13:36:58.339Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:52.649 nvme0n1 : 2.00 27997.27 109.36 0.00 0.00 4567.28 1802.24 11677.32 00:22:52.649 [2024-12-15T13:36:58.339Z] =================================================================================================================== 00:22:52.649 [2024-12-15T13:36:58.339Z] Total : 27997.27 109.36 0.00 0.00 4567.28 1802.24 11677.32 00:22:52.649 0 00:22:52.649 13:36:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:52.649 13:36:58 -- host/digest.sh@92 -- # get_accel_stats 00:22:52.649 13:36:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:52.649 13:36:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:52.649 13:36:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:52.649 | select(.opcode=="crc32c") 00:22:52.649 | "\(.module_name) \(.executed)"' 00:22:52.908 13:36:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:52.908 13:36:58 -- host/digest.sh@93 -- # exp_module=software 00:22:52.908 13:36:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:52.908 13:36:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:52.908 13:36:58 -- host/digest.sh@97 -- # killprocess 97488 00:22:52.908 13:36:58 -- common/autotest_common.sh@936 -- # '[' -z 97488 ']' 00:22:52.908 13:36:58 -- common/autotest_common.sh@940 -- # kill -0 97488 00:22:52.908 13:36:58 -- common/autotest_common.sh@941 -- # uname 00:22:52.908 13:36:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.908 13:36:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97488 00:22:52.908 killing process with pid 97488 00:22:52.908 Received shutdown signal, test time was about 2.000000 seconds 00:22:52.908 00:22:52.908 Latency(us) 00:22:52.908 [2024-12-15T13:36:58.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.908 [2024-12-15T13:36:58.598Z] =================================================================================================================== 00:22:52.908 [2024-12-15T13:36:58.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.908 13:36:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:52.908 13:36:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:52.908 13:36:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97488' 00:22:52.908 13:36:58 -- common/autotest_common.sh@955 -- # kill 97488 00:22:52.908 13:36:58 -- common/autotest_common.sh@960 -- # wait 97488 00:22:53.167 13:36:58 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:53.167 13:36:58 -- host/digest.sh@77 -- # local rw bs qd 00:22:53.167 13:36:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:53.167 13:36:58 -- host/digest.sh@80 -- # rw=randwrite 00:22:53.167 13:36:58 -- host/digest.sh@80 -- # bs=131072 00:22:53.167 13:36:58 -- host/digest.sh@80 -- # qd=16 00:22:53.167 13:36:58 -- host/digest.sh@82 -- # bperfpid=97582 00:22:53.167 13:36:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:53.167 13:36:58 -- host/digest.sh@83 -- # waitforlisten 97582 /var/tmp/bperf.sock 00:22:53.167 13:36:58 -- 
common/autotest_common.sh@829 -- # '[' -z 97582 ']' 00:22:53.167 13:36:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.167 13:36:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.167 13:36:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:53.167 13:36:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.167 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:22:53.167 [2024-12-15 13:36:58.774538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:53.167 [2024-12-15 13:36:58.774688] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:22:53.167 Zero copy mechanism will not be used. 00:22:53.167 =spdk_pid97582 ] 00:22:53.426 [2024-12-15 13:36:58.913085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.426 [2024-12-15 13:36:58.980347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.362 13:36:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.362 13:36:59 -- common/autotest_common.sh@862 -- # return 0 00:22:54.362 13:36:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:54.362 13:36:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:54.362 13:36:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:54.621 13:37:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.621 13:37:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.879 nvme0n1 00:22:54.879 13:37:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:54.879 13:37:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.879 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:54.879 Zero copy mechanism will not be used. 00:22:54.879 Running I/O for 2 seconds... 
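Every run is torn down through the killprocess helper whose expansion keeps appearing in the trace (the pid check, ps lookup, kill and wait). Condensed, and with the sudo and non-Linux branches left out, it amounts to:

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] && kill -0 "$pid"                      # process must exist and still be alive
    process_name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for a bdevperf instance
    [ "$process_name" = sudo ] || echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                          # reap it so the next run starts clean
}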
00:22:57.413 00:22:57.413 Latency(us) 00:22:57.413 [2024-12-15T13:37:03.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.413 [2024-12-15T13:37:03.103Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:57.413 nvme0n1 : 2.00 8735.57 1091.95 0.00 0.00 1827.46 1437.32 5749.29 00:22:57.413 [2024-12-15T13:37:03.103Z] =================================================================================================================== 00:22:57.413 [2024-12-15T13:37:03.103Z] Total : 8735.57 1091.95 0.00 0.00 1827.46 1437.32 5749.29 00:22:57.413 0 00:22:57.413 13:37:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:57.413 13:37:02 -- host/digest.sh@92 -- # get_accel_stats 00:22:57.413 13:37:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:57.413 13:37:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:57.413 13:37:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:57.413 | select(.opcode=="crc32c") 00:22:57.413 | "\(.module_name) \(.executed)"' 00:22:57.413 13:37:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:57.413 13:37:02 -- host/digest.sh@93 -- # exp_module=software 00:22:57.413 13:37:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:57.413 13:37:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:57.413 13:37:02 -- host/digest.sh@97 -- # killprocess 97582 00:22:57.413 13:37:02 -- common/autotest_common.sh@936 -- # '[' -z 97582 ']' 00:22:57.413 13:37:02 -- common/autotest_common.sh@940 -- # kill -0 97582 00:22:57.413 13:37:02 -- common/autotest_common.sh@941 -- # uname 00:22:57.413 13:37:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.413 13:37:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97582 00:22:57.413 killing process with pid 97582 00:22:57.413 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.413 00:22:57.413 Latency(us) 00:22:57.413 [2024-12-15T13:37:03.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.413 [2024-12-15T13:37:03.103Z] =================================================================================================================== 00:22:57.413 [2024-12-15T13:37:03.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.413 13:37:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:57.413 13:37:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:57.413 13:37:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97582' 00:22:57.413 13:37:02 -- common/autotest_common.sh@955 -- # kill 97582 00:22:57.413 13:37:02 -- common/autotest_common.sh@960 -- # wait 97582 00:22:57.413 13:37:03 -- host/digest.sh@126 -- # killprocess 97284 00:22:57.413 13:37:03 -- common/autotest_common.sh@936 -- # '[' -z 97284 ']' 00:22:57.413 13:37:03 -- common/autotest_common.sh@940 -- # kill -0 97284 00:22:57.413 13:37:03 -- common/autotest_common.sh@941 -- # uname 00:22:57.413 13:37:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.413 13:37:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97284 00:22:57.413 killing process with pid 97284 00:22:57.413 13:37:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:57.413 13:37:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:57.413 13:37:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97284' 
00:22:57.413 13:37:03 -- common/autotest_common.sh@955 -- # kill 97284 00:22:57.413 13:37:03 -- common/autotest_common.sh@960 -- # wait 97284 00:22:57.671 ************************************ 00:22:57.671 END TEST nvmf_digest_clean 00:22:57.671 ************************************ 00:22:57.671 00:22:57.671 real 0m17.278s 00:22:57.671 user 0m32.490s 00:22:57.671 sys 0m4.652s 00:22:57.671 13:37:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:57.671 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:22:57.671 13:37:03 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:57.671 13:37:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:57.671 13:37:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:57.671 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:22:57.671 ************************************ 00:22:57.671 START TEST nvmf_digest_error 00:22:57.671 ************************************ 00:22:57.671 13:37:03 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:57.671 13:37:03 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:57.671 13:37:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:57.671 13:37:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.671 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:22:57.671 13:37:03 -- nvmf/common.sh@469 -- # nvmfpid=97692 00:22:57.671 13:37:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:57.671 13:37:03 -- nvmf/common.sh@470 -- # waitforlisten 97692 00:22:57.671 13:37:03 -- common/autotest_common.sh@829 -- # '[' -z 97692 ']' 00:22:57.671 13:37:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.671 13:37:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.671 13:37:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.671 13:37:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.671 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:22:57.671 [2024-12-15 13:37:03.336865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:57.671 [2024-12-15 13:37:03.336977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.930 [2024-12-15 13:37:03.470049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.930 [2024-12-15 13:37:03.547836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:57.930 [2024-12-15 13:37:03.548015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.930 [2024-12-15 13:37:03.548027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.930 [2024-12-15 13:37:03.548035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:57.930 [2024-12-15 13:37:03.548065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.865 13:37:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.865 13:37:04 -- common/autotest_common.sh@862 -- # return 0 00:22:58.865 13:37:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:58.865 13:37:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.865 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:22:58.865 13:37:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.865 13:37:04 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:58.865 13:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.865 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:22:58.865 [2024-12-15 13:37:04.256557] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:58.866 13:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.866 13:37:04 -- host/digest.sh@104 -- # common_target_config 00:22:58.866 13:37:04 -- host/digest.sh@43 -- # rpc_cmd 00:22:58.866 13:37:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.866 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:22:58.866 null0 00:22:58.866 [2024-12-15 13:37:04.362926] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.866 [2024-12-15 13:37:04.387053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.866 13:37:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.866 13:37:04 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:58.866 13:37:04 -- host/digest.sh@54 -- # local rw bs qd 00:22:58.866 13:37:04 -- host/digest.sh@56 -- # rw=randread 00:22:58.866 13:37:04 -- host/digest.sh@56 -- # bs=4096 00:22:58.866 13:37:04 -- host/digest.sh@56 -- # qd=128 00:22:58.866 13:37:04 -- host/digest.sh@58 -- # bperfpid=97736 00:22:58.866 13:37:04 -- host/digest.sh@60 -- # waitforlisten 97736 /var/tmp/bperf.sock 00:22:58.866 13:37:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:58.866 13:37:04 -- common/autotest_common.sh@829 -- # '[' -z 97736 ']' 00:22:58.866 13:37:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:58.866 13:37:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:58.866 13:37:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:58.866 13:37:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.866 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:22:58.866 [2024-12-15 13:37:04.451099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:58.866 [2024-12-15 13:37:04.451194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97736 ] 00:22:59.125 [2024-12-15 13:37:04.591882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.125 [2024-12-15 13:37:04.653381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.692 13:37:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.692 13:37:05 -- common/autotest_common.sh@862 -- # return 0 00:22:59.692 13:37:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:59.692 13:37:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:59.950 13:37:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:59.950 13:37:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.950 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:22:59.950 13:37:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.950 13:37:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.950 13:37:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:00.209 nvme0n1 00:23:00.209 13:37:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:00.209 13:37:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.209 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:23:00.209 13:37:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.209 13:37:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:00.209 13:37:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:00.468 Running I/O for 2 seconds... 
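The nvmf_digest_error half flips the experiment around: the target's crc32c handling is routed through the accel error module while the target is still paused in --wait-for-rpc, and corruption is injected once the digest-enabled controller is attached, so the initiator must observe data-digest failures. The RPC sequence, condensed from the trace; rpc_tgt and rpc_bperf are shorthand for the two sockets involved, not names used by the scripts:

rpc_tgt()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                         # target, default /var/tmp/spdk.sock
rpc_bperf() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }  # bdevperf instance

rpc_tgt accel_assign_opc -o crc32c -m error             # crc32c now goes through the error-injection module
rpc_bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_tgt accel_error_inject_error -o crc32c -t disable   # start with injection off
rpc_bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_tgt accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results (flags as traced)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Expected: the initiator logs "data digest error on tqpair" and the reads complete with
# COMMAND TRANSIENT TRANSPORT ERROR, which is exactly what the following log lines show.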
00:23:00.468 [2024-12-15 13:37:05.958942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:05.959017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:05.959030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:05.972369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:05.972420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:05.972438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:05.982950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:05.983000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:05.983012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:05.996166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:05.996217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:05.996229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.009300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.009348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.009360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.022014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.022063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.022074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.035051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.035100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.035111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.047949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.047997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.048008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.060055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.060104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.060115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.069623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.069671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.069682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.080925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.080974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.080985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.094545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.094611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.094625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.106568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.106628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.106640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.120061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.120091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.120103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.132021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.132070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.132081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.141936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.141997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.142009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.468 [2024-12-15 13:37:06.154401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.468 [2024-12-15 13:37:06.154448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.468 [2024-12-15 13:37:06.154459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.167637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.167683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.167695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.178798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.178856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.178867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.190612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.190657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.190670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.203243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.203278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.728 [2024-12-15 13:37:06.203289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.214957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.215007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.215018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.225884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.225948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.225959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.237902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.237952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.237963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.249173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.249221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.249232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.259892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.259940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.259950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.269016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.269066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.269077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.278852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.278902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:6138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.278913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.287511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.287556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.287567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.297414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.297464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.297474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.308819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.308868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.308879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.320997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.321048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.321059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.333780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.333831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.333843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.345949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.345997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.346024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.354061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.354109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.354120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.366559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.366616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.366628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.378038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.378085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.378097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.387406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.387455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.387466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.397036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.397085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.728 [2024-12-15 13:37:06.397096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.728 [2024-12-15 13:37:06.406576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.728 [2024-12-15 13:37:06.406634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.729 [2024-12-15 13:37:06.406646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.416199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.416247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.416257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.426026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 
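The repeated nvme_tcp_accel_seq_recv_compute_crc32_done lines above are the NVMe/TCP data digest check failing: the transport carries a CRC32C digest after each data PDU payload, and this test corrupts it deliberately, so every affected READ completes with a transport-level error instead of returning data. As a rough illustration only (not SPDK's implementation; the names crc32c and payload_digest_ok are invented for this sketch), a bitwise CRC32C check over a received payload can be written in C as:

/* Illustrative sketch: verify an NVMe/TCP-style data digest.
 * CRC32C (Castagnoli), bitwise, reflected polynomial 0x82F63B78.
 * crc32c() and payload_digest_ok() are invented names, not SPDK APIs. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 1 when the digest carried at the end of the PDU matches the
 * payload, 0 otherwise (the "data digest error" case seen in this log). */
static int payload_digest_ok(const uint8_t *payload, size_t len,
                             uint32_t received_digest)
{
    return crc32c(payload, len) == received_digest;
}

int main(void)
{
    uint8_t data[512] = { 0 };
    uint32_t good = crc32c(data, sizeof(data));

    printf("intact payload ok: %d\n", payload_digest_ok(data, sizeof(data), good));
    data[100] ^= 0x01;               /* simulate the corruption the test injects */
    printf("corrupt payload ok: %d\n", payload_digest_ok(data, sizeof(data), good));
    return 0;
}

Real implementations use table-driven or hardware-accelerated CRC32C (the accel_seq in the function name suggests the digest is computed through SPDK's acceleration framework), but the pass/fail decision is the same comparison.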
00:23:00.988 [2024-12-15 13:37:06.426074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.426085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.435263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.435311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.435322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.446815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.446863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.446875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.458862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.458910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.458937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.467916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.467964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.467975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.481054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.481102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.496102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.496149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.496177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.509228] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.988 [2024-12-15 13:37:06.509276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.988 [2024-12-15 13:37:06.509287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.988 [2024-12-15 13:37:06.522568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.522644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.522656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.535684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.535733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.535744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.547419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.547468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.547478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.559735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.559791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.559802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.568380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.568427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.568438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.584012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.584062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.584073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.592979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.593011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.593022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.604923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.604972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.614493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.614541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.614551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.624205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.624253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.624264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.635096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.635143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.635154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.647093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.647141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.647151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.658704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.658753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.658764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:00.989 [2024-12-15 13:37:06.667565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:00.989 [2024-12-15 13:37:06.667621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.989 [2024-12-15 13:37:06.667633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.679217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.679264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.679275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.687709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.687755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.687767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.701067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.701115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.701126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.712725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.712773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.712784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.726042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.726103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.726114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.735063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.735113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.735123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.744581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.744660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.744671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.757583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.757641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.757653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.769087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.769135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.769145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.778396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.778443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.778454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.790492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.790541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.790552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.803382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.803430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.803441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.815428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.815476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:01.249 [2024-12-15 13:37:06.815487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.827845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.827894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.827905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.839319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.839367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.839378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.848389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.848437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.848448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.857699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.857747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.857758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.870434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.870482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.870493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.882835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.882882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.882894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.894939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.894989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9391 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.895016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.907880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.907927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.907938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.249 [2024-12-15 13:37:06.916085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.249 [2024-12-15 13:37:06.916134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.249 [2024-12-15 13:37:06.916145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.250 [2024-12-15 13:37:06.927686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.250 [2024-12-15 13:37:06.927733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.250 [2024-12-15 13:37:06.927745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.938853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.938901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.938912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.949246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.949295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.949305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.959321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.959380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.968833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.968864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.968875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.978792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.978837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.978847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.989077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.989126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.989137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:06.998507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:06.998556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:06.998568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.007730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.007777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.007788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.016899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.016947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.016958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.026311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.026358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.026369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.036072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 
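For reading the completion lines, the status appears as (status code type/status code) in hex, so (00/22) is the generic command status set with status code 0x22, spelled out here as COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the Do Not Retry bit is clear, so the host is allowed to retry the command. A hypothetical decoder for that pair (the struct and function names below are invented for illustration, not SPDK APIs) could look like:

/* Illustrative sketch: decode the "(sct/sc)" pair printed in the completions
 * above and decide whether a retry is reasonable. Names are invented. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cpl_status {
    uint8_t sct;   /* status code type: 0x0 = generic command status        */
    uint8_t sc;    /* status code: 0x22 = command transient transport error */
    bool    dnr;   /* Do Not Retry bit from the status field                */
};

/* Treat a generic-status transient transport error with DNR clear as
 * retryable, mirroring what the repeated (00/22) ... dnr:0 lines imply. */
static bool should_retry(const struct cpl_status *st)
{
    if (st->dnr)
        return false;
    return st->sct == 0x0 && st->sc == 0x22;
}

int main(void)
{
    struct cpl_status st = { .sct = 0x0, .sc = 0x22, .dnr = false };
    printf("retry (00/22, dnr:0): %s\n", should_retry(&st) ? "yes" : "no");
    return 0;
}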
00:23:01.509 [2024-12-15 13:37:07.036120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.036131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.048447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.048496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.048507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.059403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.059461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.059472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.071879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.071927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.071938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.083217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.083264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.083275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.092940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.092988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.092999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.104347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.104398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.104409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.117926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.509 [2024-12-15 13:37:07.117974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.509 [2024-12-15 13:37:07.117985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.509 [2024-12-15 13:37:07.130914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.130964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.130975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.510 [2024-12-15 13:37:07.142935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.142985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.142996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.510 [2024-12-15 13:37:07.154131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.154178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.154190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.510 [2024-12-15 13:37:07.167763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.167812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.167824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.510 [2024-12-15 13:37:07.178444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.178491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.178502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.510 [2024-12-15 13:37:07.187405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.510 [2024-12-15 13:37:07.187465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.510 [2024-12-15 13:37:07.187476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.769 [2024-12-15 13:37:07.199346] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.769 [2024-12-15 13:37:07.199394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.769 [2024-12-15 13:37:07.199405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.769 [2024-12-15 13:37:07.210979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.769 [2024-12-15 13:37:07.211039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.769 [2024-12-15 13:37:07.211050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.769 [2024-12-15 13:37:07.223684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.769 [2024-12-15 13:37:07.223751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.769 [2024-12-15 13:37:07.223762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.769 [2024-12-15 13:37:07.234050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.769 [2024-12-15 13:37:07.234110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.769 [2024-12-15 13:37:07.234121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.243900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.243949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.243960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.255628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.255676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.255687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.265811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.265860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.265886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:01.770 [2024-12-15 13:37:07.276144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.276193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.276204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.287202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.287249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.287261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.298034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.298083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.298094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.310999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.311048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.311059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.324111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.324160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.324186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.337891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.337940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.337951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.349678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.349726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.349737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.360291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.360339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.360350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.372643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.372691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.372702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.384362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.384410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.384421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.395139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.395186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.395197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.408174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.408220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.408231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.418179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.418232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.418243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.426526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.426574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.426585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.770 [2024-12-15 13:37:07.438554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.770 [2024-12-15 13:37:07.438611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.770 [2024-12-15 13:37:07.438623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:01.771 [2024-12-15 13:37:07.450789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:01.771 [2024-12-15 13:37:07.450837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:01.771 [2024-12-15 13:37:07.450848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.462670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.462720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.462730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.475446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.475507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.475518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.487633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.487680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.487691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.499080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.499129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.499140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.510134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.510182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.030 [2024-12-15 13:37:07.510194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.523769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.523822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.523835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.534176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.534225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.534236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.544494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.544542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.544553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.554340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.030 [2024-12-15 13:37:07.554388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.030 [2024-12-15 13:37:07.554399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.030 [2024-12-15 13:37:07.565688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.565736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.565747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.576217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.576265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.576276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.586704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.586752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:5674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.586762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.598741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.598788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.598798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.607344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.607391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.607401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.619321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.619369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.619380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.632021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.632069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.632080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.644174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.644222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.644233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.657014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.657062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.657073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.669188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.669238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.669248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.680699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.680746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.680757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.689938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.689996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.690007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.701949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.701996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.702006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.031 [2024-12-15 13:37:07.714992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.031 [2024-12-15 13:37:07.715040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.031 [2024-12-15 13:37:07.715051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.727677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.727725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.740751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.740798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.740809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.753000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 
00:23:02.291 [2024-12-15 13:37:07.753047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.753058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.762551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.762598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.762619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.770740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.770786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.770798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.780833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.780881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.790394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.790441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.790452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.800616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.800674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.800685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.811448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.811495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.821200] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.821248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.821259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.832228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.832276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.832287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.845517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.845588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.846173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.855310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.855358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.855368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.867386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.867434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.867445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.291 [2024-12-15 13:37:07.879638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.291 [2024-12-15 13:37:07.879686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.291 [2024-12-15 13:37:07.879698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.292 [2024-12-15 13:37:07.891540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0) 00:23:02.292 [2024-12-15 13:37:07.891588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.292 [2024-12-15 13:37:07.891624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:23:02.292 [2024-12-15 13:37:07.903121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0)
00:23:02.292 [2024-12-15 13:37:07.903169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.292 [2024-12-15 13:37:07.903179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:02.292 [2024-12-15 13:37:07.913156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0)
00:23:02.292 [2024-12-15 13:37:07.913204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.292 [2024-12-15 13:37:07.913214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:02.292 [2024-12-15 13:37:07.922541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0)
00:23:02.292 [2024-12-15 13:37:07.922589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.292 [2024-12-15 13:37:07.922608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:02.292 [2024-12-15 13:37:07.933414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0)
00:23:02.292 [2024-12-15 13:37:07.933462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.292 [2024-12-15 13:37:07.933473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:02.292 [2024-12-15 13:37:07.942553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x215c8d0)
00:23:02.292 [2024-12-15 13:37:07.942611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.292 [2024-12-15 13:37:07.942623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:02.292
00:23:02.292 Latency(us)
00:23:02.292 [2024-12-15T13:37:07.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:02.292 [2024-12-15T13:37:07.982Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:02.292 nvme0n1 : 2.00 22586.02 88.23 0.00 0.00 5662.16 2546.97 20137.43
00:23:02.292 [2024-12-15T13:37:07.982Z] ===================================================================================================================
00:23:02.292 [2024-12-15T13:37:07.982Z] Total : 22586.02 88.23 0.00 0.00 5662.16 2546.97 20137.43
00:23:02.292 0
00:23:02.292 13:37:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:02.292 13:37:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:02.292 13:37:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:02.292 13:37:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:02.292 | .driver_specific
00:23:02.292 | .nvme_error
00:23:02.292 | .status_code
00:23:02.292 | .command_transient_transport_error'
00:23:02.551 13:37:08 -- host/digest.sh@71 -- # (( 177 > 0 ))
00:23:02.551 13:37:08 -- host/digest.sh@73 -- # killprocess 97736
00:23:02.551 13:37:08 -- common/autotest_common.sh@936 -- # '[' -z 97736 ']'
00:23:02.551 13:37:08 -- common/autotest_common.sh@940 -- # kill -0 97736
00:23:02.551 13:37:08 -- common/autotest_common.sh@941 -- # uname
00:23:02.551 13:37:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:02.551 13:37:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97736
00:23:02.809 13:37:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:02.809 killing process with pid 97736
13:37:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:02.809 13:37:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97736'
Received shutdown signal, test time was about 2.000000 seconds
00:23:02.809 00
00:23:02.809 Latency(us)
00:23:02.809 [2024-12-15T13:37:08.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:02.809 [2024-12-15T13:37:08.499Z] ===================================================================================================================
00:23:02.809 [2024-12-15T13:37:08.499Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:02.809 13:37:08 -- common/autotest_common.sh@955 -- # kill 97736
00:23:02.809 13:37:08 -- common/autotest_common.sh@960 -- # wait 97736
00:23:02.809 13:37:08 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:23:02.809 13:37:08 -- host/digest.sh@54 -- # local rw bs qd
00:23:02.809 13:37:08 -- host/digest.sh@56 -- # rw=randread
00:23:02.809 13:37:08 -- host/digest.sh@56 -- # bs=131072
00:23:02.809 13:37:08 -- host/digest.sh@56 -- # qd=16
00:23:02.809 13:37:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:02.809 13:37:08 -- host/digest.sh@58 -- # bperfpid=97825
00:23:02.809 13:37:08 -- host/digest.sh@60 -- # waitforlisten 97825 /var/tmp/bperf.sock
00:23:02.809 13:37:08 -- common/autotest_common.sh@829 -- # '[' -z 97825 ']'
00:23:02.809 13:37:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:02.809 13:37:08 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:02.809 13:37:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:02.810 13:37:08 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:02.810 13:37:08 -- common/autotest_common.sh@10 -- # set +x
00:23:03.073 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:03.073 Zero copy mechanism will not be used.
00:23:03.073 [2024-12-15 13:37:08.503000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
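The run_bperf_err trace above launches a second bdevperf instance in RPC-driven mode: -z keeps it idle until it is told what to do over the Unix socket passed with -r, and the harness's waitforlisten helper blocks until that socket exists before any RPCs are sent. A minimal sketch of the same launch pattern, reusing the paths and flags from the trace (the polling loop below is a simplified stand-in for waitforlisten, not the harness's actual helper):

BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on core mask 0x2 with the randread/131072-byte/qd16 job queued; -z defers
# the actual run until it is driven over the RPC socket given with -r.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC listen socket appears.
while [ ! -S "$BPERF_SOCK" ]; do
        sleep 0.1
done

Once the socket is up, everything else is driven through /var/tmp/bperf.sock with scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py, as the trace that follows shows.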
00:23:03.073 [2024-12-15 13:37:08.503086] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97825 ]
00:23:03.073 [2024-12-15 13:37:08.634538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:03.073 [2024-12-15 13:37:08.693501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:04.034 13:37:09 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:04.034 13:37:09 -- common/autotest_common.sh@862 -- # return 0
00:23:04.034 13:37:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:04.034 13:37:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:04.293 13:37:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:04.293 13:37:09 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:04.293 13:37:09 -- common/autotest_common.sh@10 -- # set +x
00:23:04.293 13:37:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:04.293 13:37:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:04.293 13:37:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:04.551 nvme0n1
00:23:04.551 13:37:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:04.551 13:37:10 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:04.551 13:37:10 -- common/autotest_common.sh@10 -- # set +x
00:23:04.551 13:37:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:04.551 13:37:10 -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:04.551 13:37:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:04.551 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:04.551 Zero copy mechanism will not be used.
00:23:04.551 Running I/O for 2 seconds...
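Stripped of the xtrace noise, the setup just traced is a short, fixed RPC sequence followed by the 2-second run whose digest errors fill the log below. The sketch restates it as plain commands; every method name and flag is copied from the trace, while the two wrapper variables are added here for readability and the socket reached by rpc_cmd (the harness's other SPDK app, not bdevperf) is assumed to be rpc.py's default.

# bperf_rpc in the trace talks to the bdevperf app on /var/tmp/bperf.sock; rpc_cmd talks to
# the other SPDK application in this setup (the NVMe-oF target). Its socket path is assumed.
BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Keep per-command NVMe error counters and retry failed I/O indefinitely instead of failing it up the stack.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection left over from the previous run.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable

# Attach the target over TCP with data digest enabled (--ddgst); this creates bdev nvme0n1.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Start corrupting crc32c results (-i 32 copied verbatim from the trace), so data digests
# stop matching the payloads and the host reports data digest errors on its reads.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the queued randread job for its 2-second window.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Afterwards get_transient_errcount repeats the check seen at 13:37:07 above: read the bdev's
# iostat and count completions that ended as transient transport errors.
$BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

For the previous 4 KiB run that count was 177 and the (( 177 > 0 )) check passed; because --bdev-retry-count -1 retries each digest failure, the final latency table still reported 0.00 Fail/s.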
00:23:04.551 [2024-12-15 13:37:10.236414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.551 [2024-12-15 13:37:10.236472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.551 [2024-12-15 13:37:10.236485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.551 [2024-12-15 13:37:10.240101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.551 [2024-12-15 13:37:10.240149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.551 [2024-12-15 13:37:10.240162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.820 [2024-12-15 13:37:10.243576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.820 [2024-12-15 13:37:10.243635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.243647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.246580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.246654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.246667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.249837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.249886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.249913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.253443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.253491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.253503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.256977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.257040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.257053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.260841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.260889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.260901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.264362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.264419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.264431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.267984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.268033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.268045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.271081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.271130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.271141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.274844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.274895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.274907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.277437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.277483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.277495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.280894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.280941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.280984] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.284102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.284148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.284159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.287221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.287270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.287282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.290776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.290826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.290838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.293726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.293775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.293787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.296625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.296670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.296681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.299408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.299457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.299468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.302923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.302972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.302983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.305957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.306004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.306016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.308930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.308976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.308987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.311929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.311974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.311985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.315421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.315470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.315481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.319208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.319257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.319269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.322717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.322767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.322779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.326178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.326227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:04.821 [2024-12-15 13:37:10.326238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.329670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.329724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.329735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.333043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.333089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.333101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.336342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.336390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.336401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.339383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.339432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.339444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.342928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.342984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.343012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.346180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.346229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.346241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.349117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.349163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.349175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.353032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.353080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.353091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.821 [2024-12-15 13:37:10.356277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.821 [2024-12-15 13:37:10.356323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.821 [2024-12-15 13:37:10.356334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.359365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.359411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.359422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.362937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.362987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.363015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.366360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.366408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.366420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.369645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.369680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.369691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.373101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.373148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.373160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.376552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.376595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.376609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.379645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.379691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.379703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.382647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.382695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.382706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.385893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.385958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.385969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.389114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.389160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.389172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.391915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.391961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.391973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.395124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.395171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.395182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.398275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.398323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.398335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.401478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.401525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.401577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.404955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.405002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.405013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.408052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.408098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.408110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.411131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.411178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.411190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.414496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.414545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.414556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.417515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 
[2024-12-15 13:37:10.417596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.417621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.420535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.420580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.420591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.424399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.424432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.424444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.427886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.427921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.427933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.431203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.431251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.431262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.434291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.434341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.434352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.437687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.437720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.437732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.440971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.441023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.441034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.444294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.444326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.444337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.447051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.447109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.447121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.449996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.450044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.450056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.453082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.453127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.453140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.456688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.456733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.456745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.460566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.822 [2024-12-15 13:37:10.460626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.822 [2024-12-15 13:37:10.460638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.822 [2024-12-15 13:37:10.464015] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.464063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.464075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.466901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.466949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.466960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.470295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.470343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.470355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.473986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.474034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.474046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.477222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.477269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.477280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.480728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.480773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.480784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.483796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.483843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.483856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:04.823 [2024-12-15 13:37:10.486837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.486885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.486897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.489938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.489998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.493331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.493379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.493391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.496861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.496907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.496919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.500030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.500076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.500088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.503208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.503256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.503267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:04.823 [2024-12-15 13:37:10.506414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:04.823 [2024-12-15 13:37:10.506472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.823 [2024-12-15 13:37:10.506483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.509535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.509611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.509639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.512706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.512750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.512762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.516137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.516184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.516195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.519424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.519472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.519484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.522802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.522851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.522862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.525075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.083 [2024-12-15 13:37:10.525120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.083 [2024-12-15 13:37:10.525132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.083 [2024-12-15 13:37:10.528289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.528335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.528346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.532120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.532168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.532179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.535214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.535262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.535273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.538120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.538169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.538180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.541055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.541100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.541111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.544831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.544861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.544872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.548450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.548481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.548493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.551989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.552052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.552063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.555263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.555315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.555327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.559196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.559244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.559255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.562199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.562246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.562258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.565368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.565415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.565427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.569205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.569253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.569265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.572920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.572984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.572996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.577315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.577364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 
[2024-12-15 13:37:10.577377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.581479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.581527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.581539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.585480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.585527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.585538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.589453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.589500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.589511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.592677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.592725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.592737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.596397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.596446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.596457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.599332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.599380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.599392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.603262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.603315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.603326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.606237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.606286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.606298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.084 [2024-12-15 13:37:10.609829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.084 [2024-12-15 13:37:10.609864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.084 [2024-12-15 13:37:10.609877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.613160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.613206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.613218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.616749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.616796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.616807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.620044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.620092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.620104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.623238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.623287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.623299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.626595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.626655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.626667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.630247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.630295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.630306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.633451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.633498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.633509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.636652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.636700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.636711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.639752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.639800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.639811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.643017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.643068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.643078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.646481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.646531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.646542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.649429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.649475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.649486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.652459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.652505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.652516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.655705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.655753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.655765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.658937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.658987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.658998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.662101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.662150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.662161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.665397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.665443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.665454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.668278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.668324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.668336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.671944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 
[2024-12-15 13:37:10.671998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.672009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.674928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.674990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.675002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.677678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.677728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.677740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.681080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.681127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.681139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.684167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.085 [2024-12-15 13:37:10.684214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.085 [2024-12-15 13:37:10.684226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.085 [2024-12-15 13:37:10.687424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.687475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.687486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.691024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.691073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.691085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.694271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.694321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.694332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.697674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.697707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.697719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.700726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.700773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.700784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.703665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.703714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.703725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.707299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.707348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.707359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.710532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.710579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.710606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.713276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.713322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.713333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.716368] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.716415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.716426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.719571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.719644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.719657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.723010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.723059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.723070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.726094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.726142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.726153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.729350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.729407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.732670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.732715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.732726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.736106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.736154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.736165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:05.086 [2024-12-15 13:37:10.738843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.738893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.738905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.742154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.742202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.742214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.745717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.745751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.745764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.748911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.748944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.748956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.752261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.752293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.752305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.756476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.756525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.756536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.760170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.760218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.760230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.086 [2024-12-15 13:37:10.763697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.086 [2024-12-15 13:37:10.763747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.086 [2024-12-15 13:37:10.763760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.087 [2024-12-15 13:37:10.767522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.087 [2024-12-15 13:37:10.767572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.087 [2024-12-15 13:37:10.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.771005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.771054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.771066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.774178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.774227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.774239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.777535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.777617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.781352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.781401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.781428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.784552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.784610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.784625] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.787855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.787903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.787915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.791008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.791037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.791049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.794401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.794428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.794440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.797509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.797530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.797541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.801181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.801213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.801226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.805718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.805751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.805765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.809669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.809716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.809729] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.813393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.813427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.813439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.816494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.816541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.816553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.819642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.819688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.819700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.823416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.823465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.823476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.826946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.826994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.827006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.830201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.830264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.830276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.833473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.833520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.347 [2024-12-15 13:37:10.833532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.837061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.837108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.837119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.840628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.840675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.840686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.844306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.844353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.347 [2024-12-15 13:37:10.844365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.347 [2024-12-15 13:37:10.847745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.347 [2024-12-15 13:37:10.847780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.847792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.851118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.851169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.851180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.854361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.854410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.854422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.858023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.858074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.858085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.862021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.862070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.862083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.865035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.865082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.865094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.868431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.868479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.868491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.871678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.871736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.874967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.874999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.875015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.878381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.878428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.878440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.882124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.882173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.882185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.885650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.885682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.885693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.888716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.888763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.888774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.891979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.892025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.892037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.895161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.895209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.895221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.898366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.898413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.898425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.901821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.901856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.901868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.905209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.905242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.905253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.908561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.908621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.908634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.912042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.912091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.912103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.915837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.915885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.915897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.919449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.919480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.919493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.923066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.923114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.923126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.926195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.348 [2024-12-15 13:37:10.926243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.348 [2024-12-15 13:37:10.926254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.348 [2024-12-15 13:37:10.929627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 
00:23:05.349 [2024-12-15 13:37:10.929658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.929671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.932595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.932669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.932680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.936040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.936088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.936100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.939213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.939260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.942789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.942836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.942847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.946085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.946134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.946146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.949320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.949365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.949377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.952715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.952762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.952773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.956142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.956190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.956202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.959241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.959288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.959299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.962309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.962357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.962368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.965622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.965670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.965682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.968293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.968340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.968351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.971128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.971176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.971188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.974741] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.974800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.977846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.977895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.977908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.981126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.981173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.981184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.984112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.984158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.984169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.987508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.987556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.987568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.990556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.990629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:10.993973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.994051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.994063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:05.349 [2024-12-15 13:37:10.997200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:10.997245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:10.997256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:11.000449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:11.000496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.349 [2024-12-15 13:37:11.000507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.349 [2024-12-15 13:37:11.003665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.349 [2024-12-15 13:37:11.003715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.003726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.007181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.007227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.007239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.010048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.010095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.010107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.013448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.013494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.013506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.016819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.016865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.016877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.019836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.019885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.019897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.023087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.023136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.023148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.026763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.026812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.026823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.029668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.029701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.029712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.350 [2024-12-15 13:37:11.032989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.350 [2024-12-15 13:37:11.033035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.350 [2024-12-15 13:37:11.033046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.035868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.035914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.035926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.039264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.039313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.039324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.042218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.042266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.045370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.045417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.045429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.048889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.048938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.048950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.051977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.052024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.052035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.055505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.055550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.055561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.058787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.058836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.058847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.061855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.061919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 
[2024-12-15 13:37:11.061946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.065087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.065133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.065144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.068035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.068080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.068091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.070883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.070930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.070942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.073894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.073959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.073971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.077330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.077376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.077388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.080505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.080536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.611 [2024-12-15 13:37:11.080547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.611 [2024-12-15 13:37:11.083962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.611 [2024-12-15 13:37:11.083992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.084004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.087655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.087688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.087699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.091014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.091062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.091083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.094153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.094200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.094211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.096984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.097030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.097042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.099702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.099748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.099760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.102248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.102296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.102307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.105758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.105792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.105804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.109039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.109085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.109096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.112321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.112367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.112379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.115488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.115536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.115547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.118752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.118800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.118811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.122076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.122124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.122135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.125545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.125660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.125672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.129080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 
13:37:11.129127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.129138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.132645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.132691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.132702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.135574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.135646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.135658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.138648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.138694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.138705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.141506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.141558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.141587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.144978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.145023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.145034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.148441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.148498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.152346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.152392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.152404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.155787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.155837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.155848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.158895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.612 [2024-12-15 13:37:11.158943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.612 [2024-12-15 13:37:11.158955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.612 [2024-12-15 13:37:11.162135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.162182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.162193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.165423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.165469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.165480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.168562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.168622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.168634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.172030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.172090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.175101] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.175150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.175162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.178474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.178522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.178534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.181308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.181355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.181366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.184810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.184856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.184868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.188019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.188065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.188076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.191441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.191489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.191500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.194643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.194689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.194700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:05.613 [2024-12-15 13:37:11.197801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.197834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.197846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.200664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.200709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.200720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.204189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.204235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.204247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.207314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.207363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.207374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.210308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.210357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.210368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.213744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.213778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.213790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.216771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.216818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.216829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.219634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.219680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.219692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.222559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.222619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.222631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.225328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.225375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.225387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.228863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.228911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.228922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.232202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.232248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.232260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.613 [2024-12-15 13:37:11.235716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.613 [2024-12-15 13:37:11.235763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.613 [2024-12-15 13:37:11.235774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.238576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.238631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.238643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.242205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.242251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.242262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.244895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.244927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.244938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.248151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.248198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.248210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.251750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.251795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.251806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.254818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.254864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.254877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.257965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.258012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.258023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.261115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.261161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 
[2024-12-15 13:37:11.261172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.264188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.264234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.264246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.267465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.267513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.270706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.270766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.270778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.274131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.274179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.274190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.276853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.276899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.276910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.279665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.279711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.279722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.283141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.283189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.283201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.286547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.286595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.286617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.289527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.289595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.289620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.293021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.293067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.293078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.614 [2024-12-15 13:37:11.296198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.614 [2024-12-15 13:37:11.296246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.614 [2024-12-15 13:37:11.296257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.299138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.299186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.299197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.302624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.302665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.302678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.306346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.306378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.306391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.310160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.310198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.310212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.313443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.313490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.313501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.316790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.316838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.316850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.320065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.320110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.320122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.323044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.323092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.323103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.325977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.326023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.326035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.329214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.329259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.329270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.332166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.332212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.332223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.875 [2024-12-15 13:37:11.335527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.875 [2024-12-15 13:37:11.335574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.875 [2024-12-15 13:37:11.335586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.338919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.338967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.338978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.342197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.342247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.342259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.345709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.345742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.345754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.349102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.349148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.349159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.352726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 
00:23:05.876 [2024-12-15 13:37:11.352773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.352784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.356094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.356126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.356137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.359494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.359539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.359550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.362757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.362803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.362814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.365930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.365984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.366011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.369397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.369444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.369456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.372396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.372442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.372453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.375552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.375610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.375623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.378683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.378730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.378741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.382290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.382338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.382350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.385466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.385511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.385522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.387882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.387927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.387938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.391237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.391285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.391296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.394838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.394886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.394897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.398580] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.398638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.398650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.401522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.401617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.401632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.405033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.405081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.405093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.408413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.408460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.411194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.411241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.876 [2024-12-15 13:37:11.411253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.876 [2024-12-15 13:37:11.414609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.876 [2024-12-15 13:37:11.414666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.414677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.417691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.417724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.417735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:05.877 [2024-12-15 13:37:11.420772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.420819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.420830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.424192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.424238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.424249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.427394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.427442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.427454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.430381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.430430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.430441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.433579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.433619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.433632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.436832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.436879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.436891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.440178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.440230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.440241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.443040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.443088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.443100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.445942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.446005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.446016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.449184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.449230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.449241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.452525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.452572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.452583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.455307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.455355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.455366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.458649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.458709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.458721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.462203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.462251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.462263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.465498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.465544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.465580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.468492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.468538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.468549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.471888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.471935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.471947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.475386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.475434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.475446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.478892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.478940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.478951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.482662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.482710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.482721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.877 [2024-12-15 13:37:11.485674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.877 [2024-12-15 13:37:11.485725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.877 [2024-12-15 13:37:11.485737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.488827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.488873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.492198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.492248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.492260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.495414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.495462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.495473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.498511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.498559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.498571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.502390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.502438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.502450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.505121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.505151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.505179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.508677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.508722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.878 [2024-12-15 13:37:11.508734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.511804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.511852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.511864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.514897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.514946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.514957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.517929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.517985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.518028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.520946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.520992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.521004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.524188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.524234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.524245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.527700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.527745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.527757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.530977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.531061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.531072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.534431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.534480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.537173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.537219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.537230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.540492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.540550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.543530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.543579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.543591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.546540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.546589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.546610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.549953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.550001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.550029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.553243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.553289] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.878 [2024-12-15 13:37:11.553300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:05.878 [2024-12-15 13:37:11.555710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.878 [2024-12-15 13:37:11.555756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.879 [2024-12-15 13:37:11.555767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:05.879 [2024-12-15 13:37:11.558660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:05.879 [2024-12-15 13:37:11.558706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.879 [2024-12-15 13:37:11.558717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.562157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.562204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.562216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.565283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.565329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.565340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.568343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.568391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.568402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.571675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.571723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.571734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.574589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.574647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.574675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.578197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.578244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.578256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.581780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.581816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.585389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.585437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.585448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.588686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.588733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.588745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.591912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.592012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.592024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.595673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.139 [2024-12-15 13:37:11.595733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.139 [2024-12-15 13:37:11.595746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.139 [2024-12-15 13:37:11.599208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 
00:23:06.140 [2024-12-15 13:37:11.599257 ... 13:37:11.988849] nvme_tcp.c / nvme_qpair.c: repeated records omitted -- the same three-record pattern of nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10), followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ (nsid:1, len:32, varying cid and lba) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1, recurs for each of the remaining READ commands in this interval.
00:23:06.405 [2024-12-15 13:37:11.988861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.405 [2024-12-15 13:37:11.992117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.405 [2024-12-15 13:37:11.992166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.405 [2024-12-15 13:37:11.992177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.405 [2024-12-15 13:37:11.995345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.405 [2024-12-15 13:37:11.995393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.405 [2024-12-15 13:37:11.995404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.405 [2024-12-15 13:37:11.998455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.405 [2024-12-15 13:37:11.998503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.405 [2024-12-15 13:37:11.998515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.405 [2024-12-15 13:37:12.001999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.405 [2024-12-15 13:37:12.002036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.002069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.005258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.005305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.005317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.008805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.008852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.008865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.012318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.012366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4608 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.012378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.015948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.015996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.016024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.019204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.019254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.019265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.022711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.022758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.026448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.026495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.026507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.029848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.029898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.029910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.033479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.033527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.033539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.036966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.037041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.040236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.040283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.040294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.043330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.043353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.043364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.046749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.046796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.046808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.050289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.050338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.050349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.053329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.053377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.053388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.056549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.056596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.056635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.059607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.059662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.059674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.063295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.063326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.063338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.066739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.066789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.066800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.069669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.069703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.069715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.072643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.072690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.072702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.075915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.075962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.075974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.079140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.406 [2024-12-15 13:37:12.079194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.406 [2024-12-15 13:37:12.079206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.406 [2024-12-15 13:37:12.082922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20e1d10) 00:23:06.407 [2024-12-15 13:37:12.082984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.407 [2024-12-15 13:37:12.082995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.407 [2024-12-15 13:37:12.085426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.407 [2024-12-15 13:37:12.085473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.407 [2024-12-15 13:37:12.085484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.407 [2024-12-15 13:37:12.089056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.407 [2024-12-15 13:37:12.089112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.407 [2024-12-15 13:37:12.089124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.091964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.092010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.092021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.095607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.095652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.095664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.099190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.099235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.099246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.103490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.103540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.103551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.107377] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.107427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.110906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.110955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.110967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.114080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.114129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.114141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.116914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.116962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.116974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.120515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.667 [2024-12-15 13:37:12.120562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.667 [2024-12-15 13:37:12.120574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.667 [2024-12-15 13:37:12.123687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.123735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.123746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.126835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.126880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.126893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.130578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.130636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.130647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.133718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.133751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.133762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.137184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.137230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.137241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.140037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.140082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.140093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.143362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.143409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.143420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.146180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.146227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.146238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.149411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.149458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.149469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.152656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.152701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.152712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.156216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.156263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.156274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.158907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.158956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.158968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.162646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.162694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.162706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.165360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.165407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.165419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.168357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.168403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.168415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.171405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.171439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.171450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.174654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.174685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.174697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.178183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.178217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.178228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.181233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.181279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.181290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.184458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.184503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.184515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.187779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.187826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.187837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.190772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.190818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.668 [2024-12-15 13:37:12.190830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.194057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.194105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:06.668 [2024-12-15 13:37:12.194117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.668 [2024-12-15 13:37:12.196957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.668 [2024-12-15 13:37:12.197003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.197015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.200219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.200264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.200275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.203250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.203298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.203309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.206523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.206572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.206583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.209401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.209447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.209459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.212703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.212748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.212760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.216105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.216154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.216165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.219081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.219131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.219142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.222222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.222270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.222282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.669 [2024-12-15 13:37:12.225383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20e1d10) 00:23:06.669 [2024-12-15 13:37:12.225429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:06.669 [2024-12-15 13:37:12.225441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:06.669 00:23:06.669 Latency(us) 00:23:06.669 [2024-12-15T13:37:12.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.669 [2024-12-15T13:37:12.359Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:06.669 nvme0n1 : 2.00 9361.39 1170.17 0.00 0.00 1705.99 502.69 9592.09 00:23:06.669 [2024-12-15T13:37:12.359Z] =================================================================================================================== 00:23:06.669 [2024-12-15T13:37:12.359Z] Total : 9361.39 1170.17 0.00 0.00 1705.99 502.69 9592.09 00:23:06.669 0 00:23:06.669 13:37:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:06.669 13:37:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:06.669 13:37:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:06.669 | .driver_specific 00:23:06.669 | .nvme_error 00:23:06.669 | .status_code 00:23:06.669 | .command_transient_transport_error' 00:23:06.669 13:37:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:06.927 13:37:12 -- host/digest.sh@71 -- # (( 604 > 0 )) 00:23:06.927 13:37:12 -- host/digest.sh@73 -- # killprocess 97825 00:23:06.927 13:37:12 -- common/autotest_common.sh@936 -- # '[' -z 97825 ']' 00:23:06.927 13:37:12 -- common/autotest_common.sh@940 -- # kill -0 97825 00:23:06.927 13:37:12 -- common/autotest_common.sh@941 -- # uname 00:23:06.928 13:37:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:06.928 13:37:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97825 00:23:06.928 13:37:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:06.928 13:37:12 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:23:06.928 killing process with pid 97825 00:23:06.928 Received shutdown signal, test time was about 2.000000 seconds 00:23:06.928 00:23:06.928 Latency(us) 00:23:06.928 [2024-12-15T13:37:12.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.928 [2024-12-15T13:37:12.618Z] =================================================================================================================== 00:23:06.928 [2024-12-15T13:37:12.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.928 13:37:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97825' 00:23:06.928 13:37:12 -- common/autotest_common.sh@955 -- # kill 97825 00:23:06.928 13:37:12 -- common/autotest_common.sh@960 -- # wait 97825 00:23:07.186 13:37:12 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:07.186 13:37:12 -- host/digest.sh@54 -- # local rw bs qd 00:23:07.186 13:37:12 -- host/digest.sh@56 -- # rw=randwrite 00:23:07.186 13:37:12 -- host/digest.sh@56 -- # bs=4096 00:23:07.186 13:37:12 -- host/digest.sh@56 -- # qd=128 00:23:07.186 13:37:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:07.186 13:37:12 -- host/digest.sh@58 -- # bperfpid=97917 00:23:07.186 13:37:12 -- host/digest.sh@60 -- # waitforlisten 97917 /var/tmp/bperf.sock 00:23:07.186 13:37:12 -- common/autotest_common.sh@829 -- # '[' -z 97917 ']' 00:23:07.186 13:37:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:07.186 13:37:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:07.186 13:37:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:07.186 13:37:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.186 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:23:07.186 [2024-12-15 13:37:12.782630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
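For reference, the randread pass that just ended is scored by reading the NVMe error counters back over the bperf RPC socket (the host/digest.sh@71 trace a few records up, which found 604 transient transport errors). A minimal standalone sketch of that check, using only the commands, socket path, and jq path visible in the trace above:

# Sketch: count digest failures surfaced as transient transport errors
# (rpc.py path, socket, bdev name, and jq path all taken from the trace above).
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) || echo "expected data digest errors to be reported as transient transport errors" >&2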
00:23:07.186 [2024-12-15 13:37:12.782731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97917 ] 00:23:07.444 [2024-12-15 13:37:12.914423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.444 [2024-12-15 13:37:12.972918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.380 13:37:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.380 13:37:13 -- common/autotest_common.sh@862 -- # return 0 00:23:08.380 13:37:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:08.380 13:37:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:08.380 13:37:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:08.380 13:37:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.380 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:23:08.380 13:37:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.380 13:37:14 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:08.380 13:37:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:08.948 nvme0n1 00:23:08.948 13:37:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:08.948 13:37:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.948 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:23:08.948 13:37:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.948 13:37:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:08.948 13:37:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:08.948 Running I/O for 2 seconds... 
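The randwrite error-injection pass starting here follows the same shape as the previous one. A condensed sketch of the setup just traced, with commands and flags copied from the log; note that rpc_cmd in the trace addresses the NVMe-oF target application's RPC socket rather than the bperf socket, and treating that as the default socket (no -s argument) is an assumption:

# Sketch of the traced setup; all flags are taken verbatim from the trace above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &   # -z: idle until perform_tests is issued
bperfpid=$!                                                             # 97917 in this run

# bperf side: enable per-status-code NVMe error accounting (read back later via bdev_get_iostat).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side (rpc_cmd in the trace): make sure crc32c corruption is off before connecting.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Connect with data digest enabled on the initiator.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: inject crc32c corruption (-o crc32c -t corrupt -i 256 as traced), then drive I/O.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With corrupted digests in play, each affected command completes with the transient transport error seen in the records that follow, and the per-status-code counters enabled above are what the next get_transient_errcount check reads back.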
00:23:08.948 [2024-12-15 13:37:14.520600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f6890 00:23:08.948 [2024-12-15 13:37:14.521162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.521190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.533216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5a90 00:23:08.948 [2024-12-15 13:37:14.534325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.534354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.540679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1b48 00:23:08.948 [2024-12-15 13:37:14.540829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.540849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.552943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f2948 00:23:08.948 [2024-12-15 13:37:14.553729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.553755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.561910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fbcf0 00:23:08.948 [2024-12-15 13:37:14.563039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.563069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.571422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f35f0 00:23:08.948 [2024-12-15 13:37:14.571936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.571967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.582745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e27f0 00:23:08.948 [2024-12-15 13:37:14.583786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.583815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0075 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.589715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4de8 00:23:08.948 [2024-12-15 13:37:14.589984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.590003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.600709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f6020 00:23:08.948 [2024-12-15 13:37:14.601412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.601437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.608578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0bc0 00:23:08.948 [2024-12-15 13:37:14.609716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.609748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.617784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eaef0 00:23:08.948 [2024-12-15 13:37:14.618791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.626969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f8a50 00:23:08.948 [2024-12-15 13:37:14.628030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.948 [2024-12-15 13:37:14.628060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:08.948 [2024-12-15 13:37:14.636223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fc128 00:23:09.208 [2024-12-15 13:37:14.637427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.637471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.645425] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:09.208 [2024-12-15 13:37:14.645819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.645843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.655969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de470 00:23:09.208 [2024-12-15 13:37:14.656439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.656488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.667645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1f80 00:23:09.208 [2024-12-15 13:37:14.669169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.669212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.678238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe2e8 00:23:09.208 [2024-12-15 13:37:14.678689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.678771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.688373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f6458 00:23:09.208 [2024-12-15 13:37:14.689054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.689081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.696394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6300 00:23:09.208 [2024-12-15 13:37:14.696530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.696548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.707561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fc560 00:23:09.208 [2024-12-15 13:37:14.708285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.708310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.716196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fc998 00:23:09.208 [2024-12-15 13:37:14.717362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.717405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.725697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ee190 00:23:09.208 [2024-12-15 13:37:14.726185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.735148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e73e0 00:23:09.208 [2024-12-15 13:37:14.735747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.735772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.743309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fa7d8 00:23:09.208 [2024-12-15 13:37:14.743404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.743423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.754471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f2d80 00:23:09.208 [2024-12-15 13:37:14.755051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.755071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.763670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e8d30 00:23:09.208 [2024-12-15 13:37:14.764393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.764419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.771501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fdeb0 00:23:09.208 [2024-12-15 13:37:14.771645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.771665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.780646] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5658 00:23:09.208 [2024-12-15 13:37:14.780955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.208 [2024-12-15 13:37:14.780978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:09.208 [2024-12-15 13:37:14.792406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e49b0 00:23:09.208 [2024-12-15 13:37:14.793504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.793586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.799259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:09.209 [2024-12-15 13:37:14.799515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.799538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.810293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190efae0 00:23:09.209 [2024-12-15 13:37:14.810993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.811020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.819266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f31b8 00:23:09.209 [2024-12-15 13:37:14.819975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.820021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.828733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eb328 00:23:09.209 [2024-12-15 13:37:14.829409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.829438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.836786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ef6a8 00:23:09.209 [2024-12-15 13:37:14.837528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.837579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.846036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe2e8 00:23:09.209 [2024-12-15 13:37:14.846446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.846474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.857309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f46d0 00:23:09.209 [2024-12-15 13:37:14.858348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.858378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.864225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eee38 00:23:09.209 [2024-12-15 13:37:14.864398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.864416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.875309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:09.209 [2024-12-15 13:37:14.875947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.875972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.884356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5ec8 00:23:09.209 [2024-12-15 13:37:14.884981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.885021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:09.209 [2024-12-15 13:37:14.893982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4140 00:23:09.209 [2024-12-15 13:37:14.894572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.209 [2024-12-15 13:37:14.894609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.903045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5a90 00:23:09.469 [2024-12-15 13:37:14.903659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.903679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.912221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e38d0 00:23:09.469 [2024-12-15 13:37:14.913761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 
13:37:14.913807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.923438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1ca0 00:23:09.469 [2024-12-15 13:37:14.924634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.924705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.930374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f5be8 00:23:09.469 [2024-12-15 13:37:14.930728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.930752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.941509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe2e8 00:23:09.469 [2024-12-15 13:37:14.942471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.942501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.948475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e0a68 00:23:09.469 [2024-12-15 13:37:14.948583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.948610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.959639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fa7d8 00:23:09.469 [2024-12-15 13:37:14.960314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.960339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.968278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eea00 00:23:09.469 [2024-12-15 13:37:14.969446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.969484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.977514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e88f8 00:23:09.469 [2024-12-15 13:37:14.977870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 
[2024-12-15 13:37:14.977910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.986686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e3498 00:23:09.469 [2024-12-15 13:37:14.987017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.987040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:14.995826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e88f8 00:23:09.469 [2024-12-15 13:37:14.996121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:14.996144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.004884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e38d0 00:23:09.469 [2024-12-15 13:37:15.005165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.005189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.013901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f7da8 00:23:09.469 [2024-12-15 13:37:15.014162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.023251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1430 00:23:09.469 [2024-12-15 13:37:15.023913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.023939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.032320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f9b30 00:23:09.469 [2024-12-15 13:37:15.032981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.033006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.041489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1868 00:23:09.469 [2024-12-15 13:37:15.041930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:09.469 [2024-12-15 13:37:15.041957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.050519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f9f68 00:23:09.469 [2024-12-15 13:37:15.050909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.050933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.059597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e0a68 00:23:09.469 [2024-12-15 13:37:15.059935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.469 [2024-12-15 13:37:15.059958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:09.469 [2024-12-15 13:37:15.068671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fda78 00:23:09.469 [2024-12-15 13:37:15.068997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.069021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.077832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fdeb0 00:23:09.470 [2024-12-15 13:37:15.078145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.078170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.087031] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ec408 00:23:09.470 [2024-12-15 13:37:15.087288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.087332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.096113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:09.470 [2024-12-15 13:37:15.096339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.096363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.105204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ea248 00:23:09.470 [2024-12-15 13:37:15.105417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17883 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:09.470 [2024-12-15 13:37:15.105435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.114240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd208 00:23:09.470 [2024-12-15 13:37:15.114895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.114920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.123371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5658 00:23:09.470 [2024-12-15 13:37:15.124397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.124428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.132920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e95a0 00:23:09.470 [2024-12-15 13:37:15.134133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.134176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.142251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f9b30 00:23:09.470 [2024-12-15 13:37:15.143225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.143264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:09.470 [2024-12-15 13:37:15.151417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:09.470 [2024-12-15 13:37:15.152390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.470 [2024-12-15 13:37:15.152436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.160552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de8a8 00:23:09.729 [2024-12-15 13:37:15.161816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.161861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.169977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f31b8 00:23:09.729 [2024-12-15 13:37:15.171127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2766 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.171194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.179228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5220 00:23:09.729 [2024-12-15 13:37:15.180318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.180347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.188354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e0630 00:23:09.729 [2024-12-15 13:37:15.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.189477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.197434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6b70 00:23:09.729 [2024-12-15 13:37:15.198508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.198537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.206672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de8a8 00:23:09.729 [2024-12-15 13:37:15.207505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.207527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.217671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fc560 00:23:09.729 [2024-12-15 13:37:15.218472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.218508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.226650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fc998 00:23:09.729 [2024-12-15 13:37:15.227906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.236185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1710 00:23:09.729 [2024-12-15 13:37:15.236764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8368 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.236784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.245459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e3498 00:23:09.729 [2024-12-15 13:37:15.246163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.246189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.253437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0ff8 00:23:09.729 [2024-12-15 13:37:15.253641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.253660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.264639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190edd58 00:23:09.729 [2024-12-15 13:37:15.265372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.265397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.273394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe720 00:23:09.729 [2024-12-15 13:37:15.274681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.274723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.282760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ee5c8 00:23:09.729 [2024-12-15 13:37:15.283109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.283153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.292164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e38d0 00:23:09.729 [2024-12-15 13:37:15.292885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.292910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.300324] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eaab8 00:23:09.729 [2024-12-15 13:37:15.300551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:16069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.300574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.311497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4578 00:23:09.729 [2024-12-15 13:37:15.312348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.312376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.319717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ecc78 00:23:09.729 [2024-12-15 13:37:15.320808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.320868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.330310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e0630 00:23:09.729 [2024-12-15 13:37:15.331154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.331183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:09.729 [2024-12-15 13:37:15.339029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e01f8 00:23:09.729 [2024-12-15 13:37:15.340337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.729 [2024-12-15 13:37:15.340381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.348413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1b48 00:23:09.730 [2024-12-15 13:37:15.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.349074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.357643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e38d0 00:23:09.730 [2024-12-15 13:37:15.358365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.358390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.365626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de470 00:23:09.730 [2024-12-15 13:37:15.365838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.365862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.376623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:09.730 [2024-12-15 13:37:15.377279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.377303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.384510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e3060 00:23:09.730 [2024-12-15 13:37:15.385301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.385327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.395027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e38d0 00:23:09.730 [2024-12-15 13:37:15.395691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.395716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.404856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:09.730 [2024-12-15 13:37:15.406874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.406909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:09.730 [2024-12-15 13:37:15.415666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e01f8 00:23:09.730 [2024-12-15 13:37:15.417209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.730 [2024-12-15 13:37:15.417246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.426936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ef6a8 00:23:09.989 [2024-12-15 13:37:15.427191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.427217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.436647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fb8b8 00:23:09.989 [2024-12-15 13:37:15.436870] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.436893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.446243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f5be8 00:23:09.989 [2024-12-15 13:37:15.446869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.446895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.455247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5658 00:23:09.989 [2024-12-15 13:37:15.455632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.455680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.464502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4578 00:23:09.989 [2024-12-15 13:37:15.464843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.464870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.473721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eff18 00:23:09.989 [2024-12-15 13:37:15.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.474093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.482895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6300 00:23:09.989 [2024-12-15 13:37:15.483177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.483201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.492381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f4b08 00:23:09.989 [2024-12-15 13:37:15.492650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.492670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.501723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fda78 00:23:09.989 [2024-12-15 13:37:15.501987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.502007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.510755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f8618 00:23:09.989 [2024-12-15 13:37:15.510957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.510981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.519847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eea00 00:23:09.989 [2024-12-15 13:37:15.520108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.520133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.530420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eaab8 00:23:09.989 [2024-12-15 13:37:15.531218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.531246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.538678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0bc0 00:23:09.989 [2024-12-15 13:37:15.538992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.539026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.549771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e3d08 00:23:09.989 [2024-12-15 13:37:15.550553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.550583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.559675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6fa8 00:23:09.989 [2024-12-15 13:37:15.560548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.560578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.568886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe720 00:23:09.989 [2024-12-15 13:37:15.570305] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.570353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.579824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe720 00:23:09.989 [2024-12-15 13:37:15.581111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.581156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.589201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1868 00:23:09.989 [2024-12-15 13:37:15.590514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.590561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.600112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4578 00:23:09.989 [2024-12-15 13:37:15.601392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.601440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.611909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f4b08 00:23:09.989 [2024-12-15 13:37:15.612786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.612815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.620581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fb8b8 00:23:09.989 [2024-12-15 13:37:15.621584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.621622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.630163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de8a8 00:23:09.989 [2024-12-15 13:37:15.631592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.631679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.640683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190edd58 00:23:09.989 [2024-12-15 
13:37:15.642265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.642313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.650156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0788 00:23:09.989 [2024-12-15 13:37:15.650978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.651023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.659801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ed0b0 00:23:09.989 [2024-12-15 13:37:15.661111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.661156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:09.989 [2024-12-15 13:37:15.671620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ed0b0 00:23:09.989 [2024-12-15 13:37:15.672798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.989 [2024-12-15 13:37:15.672830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.679689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1430 00:23:10.249 [2024-12-15 13:37:15.679896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.679922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.693357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ed920 00:23:10.249 [2024-12-15 13:37:15.694251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.694281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.703419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f2d80 00:23:10.249 [2024-12-15 13:37:15.704996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.705039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.713619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f1430 
00:23:10.249 [2024-12-15 13:37:15.715035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.715080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.725400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eaab8 00:23:10.249 [2024-12-15 13:37:15.726469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.726509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.734106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ec840 00:23:10.249 [2024-12-15 13:37:15.735195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.735240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.744134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:10.249 [2024-12-15 13:37:15.745676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.745723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.754641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ea248 00:23:10.249 [2024-12-15 13:37:15.756113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.756158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.765499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e9168 00:23:10.249 [2024-12-15 13:37:15.766541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.766570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.772629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f57b0 00:23:10.249 [2024-12-15 13:37:15.772757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.772775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.783731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with 
pdu=0x2000190f6cc8 00:23:10.249 [2024-12-15 13:37:15.784283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.784308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.792958] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fb480 00:23:10.249 [2024-12-15 13:37:15.793734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.793760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.800957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e8088 00:23:10.249 [2024-12-15 13:37:15.801231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.801254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.812054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7c50 00:23:10.249 [2024-12-15 13:37:15.812773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.812797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.819937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fcdd0 00:23:10.249 [2024-12-15 13:37:15.820794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.820823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.831154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f3e60 00:23:10.249 [2024-12-15 13:37:15.832312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.832356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.838103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e12d8 00:23:10.249 [2024-12-15 13:37:15.838388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.838411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.849161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9380e0) with pdu=0x2000190f6020 00:23:10.249 [2024-12-15 13:37:15.849982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.850011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.249 [2024-12-15 13:37:15.858550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:10.249 [2024-12-15 13:37:15.859305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.249 [2024-12-15 13:37:15.859330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.866533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.250 [2024-12-15 13:37:15.867386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.867416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.875960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.250 [2024-12-15 13:37:15.876999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.877043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.885082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.250 [2024-12-15 13:37:15.886254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.886298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.894301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.250 [2024-12-15 13:37:15.895686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.895730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.903643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de038 00:23:10.250 [2024-12-15 13:37:15.904029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.904073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.913456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9380e0) with pdu=0x2000190ee5c8 00:23:10.250 [2024-12-15 13:37:15.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.913628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.922876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f35f0 00:23:10.250 [2024-12-15 13:37:15.923678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.923710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:10.250 [2024-12-15 13:37:15.933486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f35f0 00:23:10.250 [2024-12-15 13:37:15.934219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.250 [2024-12-15 13:37:15.934250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.942340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1b48 00:23:10.510 [2024-12-15 13:37:15.943536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.943580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.951845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e3d08 00:23:10.510 [2024-12-15 13:37:15.952288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.963175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fb8b8 00:23:10.510 [2024-12-15 13:37:15.964200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.964259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.970147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fbcf0 00:23:10.510 [2024-12-15 13:37:15.970352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.981183] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e88f8 00:23:10.510 [2024-12-15 13:37:15.981867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.981907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.989264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eaef0 00:23:10.510 [2024-12-15 13:37:15.990250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.990281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:15.998643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190dece0 00:23:10.510 [2024-12-15 13:37:15.999682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:15.999711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:16.008823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe720 00:23:10.510 [2024-12-15 13:37:16.009520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:16.009545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:16.017947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0ff8 00:23:10.510 [2024-12-15 13:37:16.018656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.510 [2024-12-15 13:37:16.018692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.510 [2024-12-15 13:37:16.027467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190edd58 00:23:10.510 [2024-12-15 13:37:16.028187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.028213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.037148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e9168 00:23:10.511 [2024-12-15 13:37:16.038433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.038476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.047491] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6fa8 00:23:10.511 [2024-12-15 13:37:16.048409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.048451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.056273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ddc00 00:23:10.511 [2024-12-15 13:37:16.057600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.057653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.065456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ef270 00:23:10.511 [2024-12-15 13:37:16.065994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.066020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.074673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f9b30 00:23:10.511 [2024-12-15 13:37:16.075168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.075192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.085023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f8e88 00:23:10.511 [2024-12-15 13:37:16.085629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.085662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.094176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df550 00:23:10.511 [2024-12-15 13:37:16.095188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.095234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.103406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6300 00:23:10.511 [2024-12-15 13:37:16.104389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.104433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 
13:37:16.113850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f3a28 00:23:10.511 [2024-12-15 13:37:16.114711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.114755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.122151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e0630 00:23:10.511 [2024-12-15 13:37:16.123202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.123245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.130761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fac10 00:23:10.511 [2024-12-15 13:37:16.130951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.130989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.139911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fda78 00:23:10.511 [2024-12-15 13:37:16.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.140112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.148897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e8d30 00:23:10.511 [2024-12-15 13:37:16.150053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.150101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.159544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fa7d8 00:23:10.511 [2024-12-15 13:37:16.160150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.160179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.168409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f7970 00:23:10.511 [2024-12-15 13:37:16.169951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.170001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:23:10.511 [2024-12-15 13:37:16.177323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5a90 00:23:10.511 [2024-12-15 13:37:16.178075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.178115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.185533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ed920 00:23:10.511 [2024-12-15 13:37:16.185775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.185799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:10.511 [2024-12-15 13:37:16.196793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:10.511 [2024-12-15 13:37:16.197583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.511 [2024-12-15 13:37:16.197632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.205058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190dece0 00:23:10.771 [2024-12-15 13:37:16.206087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.206130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.214269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190dfdc0 00:23:10.771 [2024-12-15 13:37:16.214692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.214722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.225495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f6458 00:23:10.771 [2024-12-15 13:37:16.226537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.226566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.232514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f0ff8 00:23:10.771 [2024-12-15 13:37:16.232698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.232721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:000d p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.243821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f4298 00:23:10.771 [2024-12-15 13:37:16.244553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.244582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.252049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ebb98 00:23:10.771 [2024-12-15 13:37:16.252987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.253031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.262067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ec408 00:23:10.771 [2024-12-15 13:37:16.263203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.263248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.271364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe2e8 00:23:10.771 [2024-12-15 13:37:16.272861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.272906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.279349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e6738 00:23:10.771 [2024-12-15 13:37:16.279817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.279845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.290127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4140 00:23:10.771 [2024-12-15 13:37:16.290950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.290980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.299127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df118 00:23:10.771 [2024-12-15 13:37:16.300574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.300646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.308612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df550 00:23:10.771 [2024-12-15 13:37:16.309676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.309738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.317702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e1b48 00:23:10.771 [2024-12-15 13:37:16.318737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.318782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.327046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fa3a0 00:23:10.771 [2024-12-15 13:37:16.328527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.328571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.335044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ff3c8 00:23:10.771 [2024-12-15 13:37:16.335408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.771 [2024-12-15 13:37:16.335432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:10.771 [2024-12-15 13:37:16.344199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190de8a8 00:23:10.771 [2024-12-15 13:37:16.344515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.344538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.353250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e5a90 00:23:10.772 [2024-12-15 13:37:16.353539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.353598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.362348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.772 [2024-12-15 13:37:16.362635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.362659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.371414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e23b8 00:23:10.772 [2024-12-15 13:37:16.371680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.371706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.380471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e4140 00:23:10.772 [2024-12-15 13:37:16.380713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.380737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.389516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190eb760 00:23:10.772 [2024-12-15 13:37:16.389769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.389789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.398590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ea248 00:23:10.772 [2024-12-15 13:37:16.398932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.398953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.408235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e2c28 00:23:10.772 [2024-12-15 13:37:16.409155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.409185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.416808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f9b30 00:23:10.772 [2024-12-15 13:37:16.416895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.416914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.428241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f81e0 00:23:10.772 [2024-12-15 13:37:16.428767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.441722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fd640 00:23:10.772 [2024-12-15 13:37:16.443161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.443193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:10.772 [2024-12-15 13:37:16.454458] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190ddc00 00:23:10.772 [2024-12-15 13:37:16.455017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.772 [2024-12-15 13:37:16.455049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:11.030 [2024-12-15 13:37:16.464377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fac10 00:23:11.030 [2024-12-15 13:37:16.465008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.030 [2024-12-15 13:37:16.465040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:11.030 [2024-12-15 13:37:16.472808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190df988 00:23:11.030 [2024-12-15 13:37:16.472956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.030 [2024-12-15 13:37:16.472975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:11.030 [2024-12-15 13:37:16.484996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190e7818 00:23:11.030 [2024-12-15 13:37:16.485790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.030 [2024-12-15 13:37:16.485834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:11.030 [2024-12-15 13:37:16.494061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190f2948 00:23:11.030 [2024-12-15 13:37:16.495262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.030 [2024-12-15 13:37:16.495307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:11.030 [2024-12-15 13:37:16.503399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9380e0) with pdu=0x2000190fe2e8 00:23:11.030 [2024-12-15 13:37:16.503779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:11.030 [2024-12-15 13:37:16.503799] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:11.030 00:23:11.030 Latency(us) 00:23:11.030 [2024-12-15T13:37:16.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.030 [2024-12-15T13:37:16.720Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:11.030 nvme0n1 : 2.00 26696.32 104.28 0.00 0.00 4789.63 1824.58 14477.50 00:23:11.030 [2024-12-15T13:37:16.720Z] =================================================================================================================== 00:23:11.030 [2024-12-15T13:37:16.720Z] Total : 26696.32 104.28 0.00 0.00 4789.63 1824.58 14477.50 00:23:11.030 0 00:23:11.030 13:37:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:11.030 13:37:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:11.030 13:37:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:11.030 13:37:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:11.030 | .driver_specific 00:23:11.030 | .nvme_error 00:23:11.030 | .status_code 00:23:11.030 | .command_transient_transport_error' 00:23:11.289 13:37:16 -- host/digest.sh@71 -- # (( 209 > 0 )) 00:23:11.289 13:37:16 -- host/digest.sh@73 -- # killprocess 97917 00:23:11.289 13:37:16 -- common/autotest_common.sh@936 -- # '[' -z 97917 ']' 00:23:11.289 13:37:16 -- common/autotest_common.sh@940 -- # kill -0 97917 00:23:11.289 13:37:16 -- common/autotest_common.sh@941 -- # uname 00:23:11.289 13:37:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.289 13:37:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97917 00:23:11.289 13:37:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:11.289 killing process with pid 97917 00:23:11.289 Received shutdown signal, test time was about 2.000000 seconds 00:23:11.289 00:23:11.289 Latency(us) 00:23:11.289 [2024-12-15T13:37:16.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.289 [2024-12-15T13:37:16.979Z] =================================================================================================================== 00:23:11.289 [2024-12-15T13:37:16.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.289 13:37:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:11.289 13:37:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97917' 00:23:11.289 13:37:16 -- common/autotest_common.sh@955 -- # kill 97917 00:23:11.289 13:37:16 -- common/autotest_common.sh@960 -- # wait 97917 00:23:11.548 13:37:17 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:11.548 13:37:17 -- host/digest.sh@54 -- # local rw bs qd 00:23:11.548 13:37:17 -- host/digest.sh@56 -- # rw=randwrite 00:23:11.548 13:37:17 -- host/digest.sh@56 -- # bs=131072 00:23:11.548 13:37:17 -- host/digest.sh@56 -- # qd=16 00:23:11.548 13:37:17 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:11.548 13:37:17 -- host/digest.sh@58 -- # bperfpid=98007 00:23:11.548 13:37:17 -- host/digest.sh@60 -- # waitforlisten 98007 /var/tmp/bperf.sock 00:23:11.548 13:37:17 -- common/autotest_common.sh@829 -- # '[' -z 98007 ']' 00:23:11.548 13:37:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:11.548 13:37:17 -- common/autotest_common.sh@834 -- # local max_retries=100 
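The get_transient_errcount step traced just above reduces to a small helper: query the bdevperf app's per-bdev I/O statistics over its RPC socket and pull out the transient-transport-error counter that the data-digest failures were folded into. A minimal sketch, assuming only what the trace shows (the rpc.py path, the /var/tmp/bperf.sock address, and the jq filter); the shell wrapper and variable names themselves are illustrative:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        # Per-bdev I/O stats from the bdevperf app, then the counter of
        # completions that came back as COMMAND TRANSIENT TRANSPORT ERROR.
        "$rpc_py" -s "$bperf_sock" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The run above reports 209 such errors, so the (( count > 0 )) check passes.
    count=$(get_transient_errcount nvme0n1)
    (( count > 0 ))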
00:23:11.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:11.548 13:37:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:11.548 13:37:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.548 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:23:11.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:11.548 Zero copy mechanism will not be used. 00:23:11.548 [2024-12-15 13:37:17.076012] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:11.548 [2024-12-15 13:37:17.076100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98007 ] 00:23:11.548 [2024-12-15 13:37:17.205178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.807 [2024-12-15 13:37:17.264712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.757 13:37:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.757 13:37:18 -- common/autotest_common.sh@862 -- # return 0 00:23:12.757 13:37:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:12.757 13:37:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:12.757 13:37:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:12.757 13:37:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.757 13:37:18 -- common/autotest_common.sh@10 -- # set +x 00:23:12.757 13:37:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.757 13:37:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:12.757 13:37:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:13.040 nvme0n1 00:23:13.040 13:37:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:13.040 13:37:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.040 13:37:18 -- common/autotest_common.sh@10 -- # set +x 00:23:13.040 13:37:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.040 13:37:18 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:13.040 13:37:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:13.300 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:13.300 Zero copy mechanism will not be used. 00:23:13.300 Running I/O for 2 seconds... 
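The setup traced above for this 131072-byte, queue-depth-16 error run can be summarized as the sequence below. Every path, address, flag, and RPC name is copied from the trace; the only assumptions added are the shell structure and that the bare rpc.py calls (rpc_cmd in the trace) go to the nvmf target app's default RPC socket rather than bperf.sock:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    bperf_sock=/var/tmp/bperf.sock

    # Initiator-side bdevperf: 2-second randwrite run, 128 KiB I/O, queue depth 16,
    # started waiting (-z) so it can be configured over RPC before perform_tests.
    "$bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Keep NVMe error statistics and retry indefinitely, so every digest failure
    # is retried and counted instead of failing the job outright.
    "$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # crc32c corruption is disabled while the ddgst-enabled controller attaches,
    # then injected for the next 32 operations. These two calls go through
    # rpc_cmd in the trace, assumed here to hit the target app's default socket.
    "$rpc_py" accel_error_inject_error -o crc32c -t disable
    "$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$rpc_py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload; the digest errors that follow are the expected outcome.
    "$bperf_py" -s "$bperf_sock" perform_tests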
00:23:13.300 [2024-12-15 13:37:18.744251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.744538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.744566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.749009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.749148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.749171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.753232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.753325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.753347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.757321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.757428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.757448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.761509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.761665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.761690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.765523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.765642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.765670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.769713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.769834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.300 [2024-12-15 13:37:18.769858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.300 [2024-12-15 13:37:18.773635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.300 [2024-12-15 13:37:18.773865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.773919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.777375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.777646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.781217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.781323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.781343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.785014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.785110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.785130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.788765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.788873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.788893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.792578] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.792681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.792701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.796334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.796455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.796475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.800141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.800261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.800281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.803943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.804168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.807724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.807930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.807950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.811466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.811591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.811611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.815238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.815346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.815367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.818980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.819085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.819105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.822729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.822830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.822850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.826473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.826612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.826645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.830312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.830433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.830454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.834187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.834391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.834427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.837970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.838204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.838230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.841806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.841902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.841922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.845506] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.845650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.845682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.849231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.849328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 
13:37:18.849348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.852970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.853077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.853097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.856715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.856853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.301 [2024-12-15 13:37:18.860414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.301 [2024-12-15 13:37:18.860533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.301 [2024-12-15 13:37:18.860553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.864294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.864473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.864499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.868108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.868293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.868318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.871922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.872069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.872090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.875752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.875844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.875864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.879462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.879571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.879590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.883206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.883316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.883336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.887072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.887191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.887211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.890868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.891008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.891028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.894761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.894960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.894981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.898503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.898787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.898810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.902315] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.902436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.902456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.906092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.906192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.906212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.909832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.909971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.909991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.913478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.913598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.913630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.917232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.917334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.917354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.921026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.921131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.921151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.924850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.925029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.925055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.928566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.928784] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.928804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.932308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.932441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.932461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.936136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.936244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.936265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.939907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.940030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.940050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.943613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.943722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.943743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.947323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.947441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.302 [2024-12-15 13:37:18.947461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.302 [2024-12-15 13:37:18.951189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.302 [2024-12-15 13:37:18.951297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.951318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.954994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.955189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.955209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.958668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.958872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.958892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.962385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.962482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.962503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.966196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.966287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.966307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.970071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.970188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.970208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.973837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.973957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.973978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.977502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.977665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.977686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.981282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 
[2024-12-15 13:37:18.981402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.981423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.303 [2024-12-15 13:37:18.985084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.303 [2024-12-15 13:37:18.985280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.303 [2024-12-15 13:37:18.985301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:18.988807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:18.989013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:18.989038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:18.992720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:18.992884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:18.992910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:18.996430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:18.996511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:18.996532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.000118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.000217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:19.000236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.003808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.003913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:19.003933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.007485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) 
with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.007640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:19.007661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.011263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.011385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:19.011406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.015120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.015314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.563 [2024-12-15 13:37:19.015336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.563 [2024-12-15 13:37:19.018831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.563 [2024-12-15 13:37:19.019014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.019034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.022538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.022703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.022724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.026215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.026327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.026347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.030025] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.030143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.030163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.033806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.033897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.033933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.037544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.037683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.037703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.041389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.041493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.041514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.045235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.045444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.045464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.049049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.049245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.049271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.052893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.053035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.053058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.056747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.056828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.056849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.060504] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.060581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.060629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.064152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.064261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.067830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.067946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.067967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.071519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.071670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.071691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.075471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.075696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.075717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.079185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.079383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.079404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.082973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.083103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.083123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:13.564 [2024-12-15 13:37:19.086736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.086812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.086832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.090407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.090535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.090556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.094139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.094229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.094249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.098008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.098122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.098143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.101670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.101797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.564 [2024-12-15 13:37:19.101818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.564 [2024-12-15 13:37:19.105388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.564 [2024-12-15 13:37:19.105621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.105642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.109166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.109353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.112952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.113054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.113075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.116560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.116700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.116720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.120314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.120415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.120436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.124011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.124118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.124137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.127687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.127809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.127830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.131372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.131493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.131514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.135235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.135433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.135454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.138949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.139194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.139217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.142831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.142957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.142978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.146526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.146627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.146658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.150191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.150285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.150306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.153874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.153980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.154000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.157493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.157647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.157668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.161168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.161298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.161318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.165001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.165195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.165215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.168663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.168838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.168858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.172514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.172673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.176258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.176366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.176386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.179904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.180032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.183693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.183789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.183810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.187677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.187830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 
[2024-12-15 13:37:19.187851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.565 [2024-12-15 13:37:19.191692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.565 [2024-12-15 13:37:19.191822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.565 [2024-12-15 13:37:19.191877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.196143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.196341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.196366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.200285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.200497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.200518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.204459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.204650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.204689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.208689] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.208791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.208813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.212744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.212852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.212874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.216794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.216889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.216910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.220839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.221000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.221021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.224767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.224926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.224948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.228749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.228973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.228994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.232915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.233154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.233175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.236854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.237016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.237037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.240708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.240803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.240823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.244466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.244574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.244595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.566 [2024-12-15 13:37:19.248292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.566 [2024-12-15 13:37:19.248384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.566 [2024-12-15 13:37:19.248405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.252294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.252422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.252449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.256057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.256154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.259991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.260216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.260238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.263854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.264048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.264074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.267855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.268036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.268062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.271749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.271842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.271863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.275451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.275560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.275581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.279276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.279392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.826 [2024-12-15 13:37:19.279412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.826 [2024-12-15 13:37:19.283144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.826 [2024-12-15 13:37:19.283291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.283312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.287174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.287274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.287294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.291188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.291399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.291421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.294883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.295112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.295133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.298545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.298789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.298812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.302431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.302537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.302558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.306382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.306479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.306499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.310246] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.310341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.310361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.314165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.314310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.314331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.317932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.318094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.318114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.321886] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.322138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.322163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.325935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.326137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.326157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.329713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.329904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.329940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.333459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.333597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.333630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.337254] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.337331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.337351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.341113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.341206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.341228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.345071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.345231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.348904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.349059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.349086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.352858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 
13:37:19.353055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.353081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.356730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.356893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.356919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.360648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.360802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.360828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.364444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.364536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.364557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.368188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.368288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.368308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.371981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.372071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.372091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.375973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.376119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.376140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.379715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 
00:23:13.827 [2024-12-15 13:37:19.379866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.379886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.383511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.383762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.383784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.387525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.387762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.387788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.391411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.391576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.827 [2024-12-15 13:37:19.391627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.827 [2024-12-15 13:37:19.395386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.827 [2024-12-15 13:37:19.395515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.395535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.399267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.399376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.399395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.403134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.403243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.403262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.406927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) 
with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.407071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.407098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.410623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.410757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.410777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.414411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.414667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.418196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.418379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.418404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.421901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.422085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.422106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.425459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.425596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.425627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.429372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.429458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.429478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.433118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.433191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.433211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.437001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.437136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.437163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.440741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.440896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.440921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.444559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.444747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.444773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.448294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.448465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.448491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.451974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.452128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.452154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.455610] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.455715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.455735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.459274] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.459368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.459388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.463060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.463171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.463192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.466981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.467145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.467170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.470849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.471006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.471032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.474805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.475005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.475030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.478584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.478839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.478866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.482362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.828 [2024-12-15 13:37:19.482472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.828 [2024-12-15 13:37:19.482492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.828 [2024-12-15 13:37:19.486337] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.486449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.486469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.490264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.490345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.490366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.494120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.494209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.494229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.498070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.498213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.498233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.501960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.502103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.502123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.505956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.506151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.506171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 13:37:19.509623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.509820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.509842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.829 [2024-12-15 
13:37:19.513369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:13.829 [2024-12-15 13:37:19.513521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.829 [2024-12-15 13:37:19.513547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.517096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.517173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.517193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.520821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.520914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.520934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.524530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.524618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.524639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.528265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.528389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.528411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.531839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.531978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.535762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.535957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.535977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:14.090 [2024-12-15 13:37:19.539426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.539611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.539659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.543057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.543215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.543236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.546690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.546789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.546810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.550427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.550534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.550554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.554087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.554175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.554195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.557820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.557950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.557987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.561412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.561550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.561598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.565184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.565375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.565395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.568844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.569086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.569111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.572529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.572686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.572711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.576301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.576377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.576398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.579993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.580094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.580114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.583599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.583704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.583724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.587264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.587407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.587427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.591054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.591193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.591213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.594816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.595010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.595030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.090 [2024-12-15 13:37:19.598509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.090 [2024-12-15 13:37:19.598721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.090 [2024-12-15 13:37:19.598742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.602141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.602300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.602321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.605840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.605943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.605963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.609473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.609624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.609646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.613230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.613306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.613326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.617007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.617133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.620729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.620862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.620888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.624509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.624721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.624747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.628285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.628475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.628501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.632040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.632191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.632217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.635792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.635879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.635900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.639385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.639488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.639508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.643213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.643289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.643310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.646899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.647062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.647083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.650577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.650750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.650771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.654321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.654523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.654544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.658056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.658259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.658284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.661763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.661941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.661962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.665435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.665529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.665549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.669092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.669180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.669201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.672743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.672830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.672850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.676460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.676639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.676661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.680167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.680291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.680311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.683959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.684150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.091 [2024-12-15 13:37:19.684171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.091 [2024-12-15 13:37:19.687658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.091 [2024-12-15 13:37:19.687888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.687930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.691429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.691575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 
13:37:19.691626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.695179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.695291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.695312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.698825] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.698933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.698953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.702520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.702609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.702642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.706259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.706401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.710020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.710169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.710189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.713892] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.714087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.714107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.717499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.717744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:14.092 [2024-12-15 13:37:19.717770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.721242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.721406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.721426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.725027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.725137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.725157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.728749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.728835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.728855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.732344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.732445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.732464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.736005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.736147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.736168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.739708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.739832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.739852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.743662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.743863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.743889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.747448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.747690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.747711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.751472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.751662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.751697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.755658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.755792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.755814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.759926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.760084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.760106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.764173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.764267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.764289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.768560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.768768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.768792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.772738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.092 [2024-12-15 13:37:19.772889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.092 [2024-12-15 13:37:19.772912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.092 [2024-12-15 13:37:19.777073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.777249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.777269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.781195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.781389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.781415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.785319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.785475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.785503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.789456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.789552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.789613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.793614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.793698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.793720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.797674] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.797762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.797785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.801457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.801646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.801692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.805160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.805298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.805324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.809062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.809253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.809273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.812786] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.813024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.813044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.816544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.816711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.816737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.820428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.820503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.820523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.824074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.824176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.824196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.827775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.827862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.353 [2024-12-15 13:37:19.827882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.353 [2024-12-15 13:37:19.831424] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.353 [2024-12-15 13:37:19.831574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.831594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.835104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.835253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.835273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.838926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.839120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.839141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.842712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.842959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.842984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.846387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.846496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.846516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.850143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.850271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.853943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.854032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.854052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.857648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.857725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.857745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.861295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.861437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.861457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.864970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.865112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.865133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.868875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.869074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.869100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.872640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.872844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.876241] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.876390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.876410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.879916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 
13:37:19.880005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.880025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.883572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.883706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.883726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.887273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.887362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.887382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.891083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.891241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.891262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.894780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.894931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.894951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.898657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.898854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.898879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.902396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.902585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.902605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.906010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 
00:23:14.354 [2024-12-15 13:37:19.906172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.906192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.909699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.909807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.909827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.913292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.913398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.913417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.917044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.354 [2024-12-15 13:37:19.917148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.354 [2024-12-15 13:37:19.917169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.354 [2024-12-15 13:37:19.920716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.920859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.920879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.924429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.924528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.924548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.928217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.928411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.928431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.931898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with 
pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.932102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.932122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.935661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.935838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.935858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.939369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.939463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.943059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.943153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.943172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.946837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.946928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.946948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.950641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.950800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.950820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.954332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.954463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.954483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.958245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.958436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.958456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.961957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.962196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.962222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.965520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.965674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.965695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.969158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.969250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.969269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.972819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.972913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.972933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.976625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.976722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.976741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.980303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.980449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.980468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.984011] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.984149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.984169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.987780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.987970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.987991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.991555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.991827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.991849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.995266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.995433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.995454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:19.998933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:19.999065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:19.999085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.355 [2024-12-15 13:37:20.003207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.355 [2024-12-15 13:37:20.003322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.355 [2024-12-15 13:37:20.003344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.007405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.007502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.011657] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.011833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.015806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.015964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.015985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.019925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.020166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.020193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.024340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.024645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.024715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.029000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.029170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.029191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.033060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.033175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.033196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.356 [2024-12-15 13:37:20.037019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.356 [2024-12-15 13:37:20.037112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.356 [2024-12-15 13:37:20.037132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.616 
[2024-12-15 13:37:20.040960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.041066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.044904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.045049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.045070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.049340] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.049504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.049534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.054739] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.054924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.054952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.058791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.059010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.059031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.062671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.062837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.062857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.066513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.066647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.066668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.070466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.070557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.070577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.074263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.074370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.074391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.078336] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.078485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.078505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.082186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.082311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.082330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.086104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.086297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.086317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.090076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.090261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.090281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.093908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.094070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.094090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.097742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.097853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.097905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.101495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.101659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.101680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.105287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.105392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.616 [2024-12-15 13:37:20.105411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.616 [2024-12-15 13:37:20.109179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.616 [2024-12-15 13:37:20.109320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.109339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.113028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.113154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.113178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.117010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.117189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.117214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.120890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.121120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.121161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.124572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.124691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.124711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.128343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.128451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.128470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.132187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.132275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.132294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.135991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.136126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.139821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.140007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.140028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.143561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.143778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.143798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.147342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.147517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.147537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.151103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.151218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.151237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.154889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.154989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.155009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.158703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.158865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.158884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.162565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.162692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.162712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.166227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.166334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.166353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.170129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.170283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.170308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.173910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.174101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 
[2024-12-15 13:37:20.174121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.177655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.177814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.177833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.181363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.181484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.181503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.185041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.185133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.185152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.188782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.188937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.188957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.192546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.192657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.192676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.617 [2024-12-15 13:37:20.196309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.617 [2024-12-15 13:37:20.196411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.617 [2024-12-15 13:37:20.196430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.200128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.200292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.200312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.203896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.204104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.204124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.207633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.207806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.207825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.211356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.211467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.211486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.215075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.215187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.215206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.218874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.219024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.219043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.222634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.222732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.222751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.226320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.226410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.226429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.230138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.230306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.230325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.233852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.234068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.234087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.237678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.237855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.237875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.241466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.241596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.241628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.245273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.245348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.245368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.249142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.249273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.249298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.253039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.253130] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.253149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.256842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.256915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.256933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.260704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.260858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.260878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.264384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.264535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.264553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.268243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.268416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.268436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.271978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.272109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.272129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.275743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.275836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.275856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.279439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.279582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.279628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.618 [2024-12-15 13:37:20.283188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.618 [2024-12-15 13:37:20.283278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.618 [2024-12-15 13:37:20.283298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.619 [2024-12-15 13:37:20.286879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.619 [2024-12-15 13:37:20.286968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.619 [2024-12-15 13:37:20.286988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.619 [2024-12-15 13:37:20.290753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.619 [2024-12-15 13:37:20.290922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.619 [2024-12-15 13:37:20.290942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.619 [2024-12-15 13:37:20.294548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.619 [2024-12-15 13:37:20.294735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.619 [2024-12-15 13:37:20.294754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.619 [2024-12-15 13:37:20.298427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.619 [2024-12-15 13:37:20.298614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.619 [2024-12-15 13:37:20.298650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.619 [2024-12-15 13:37:20.302230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.619 [2024-12-15 13:37:20.302403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.619 [2024-12-15 13:37:20.302422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.306010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 
[2024-12-15 13:37:20.306117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.306136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.309916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.310092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.310112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.313766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.313862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.313898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.317484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.317612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.317633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.321412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.321613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.321634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.325106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.325277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.325296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.328972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.329141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.329166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.332743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) 
with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.332841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.332859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.336378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.336472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.336491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.340142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.340309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.340329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.343974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.344084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.344103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.347695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.347793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.347811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.351470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.351650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.351669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.355173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.355338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.355358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.358942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.359117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.359136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.362746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.362844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.362863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.366533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.366624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.366656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.370262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.370413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.370433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.374036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.374125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.374144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.377765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.377856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.377875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.381405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.879 [2024-12-15 13:37:20.381615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.879 [2024-12-15 13:37:20.381635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.879 [2024-12-15 13:37:20.385343] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.385511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.385535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.389162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.389335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.389361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.393266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.393434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.393458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.397185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.397266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.397285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.401462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.401635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.401657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.405671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.405765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.405787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.409949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.410087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.410107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:14.880 [2024-12-15 13:37:20.414194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.414369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.414389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.418318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.418483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.418502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.422500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.422753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.422780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.426640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.426770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.426792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.430617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.430731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.430751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.434658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.434857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.434878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.438806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.438891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.438912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.442754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.442847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.442866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.446703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.446883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.446902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.450526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.450724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.450774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.454352] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.454541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.454560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.458333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.458494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.458545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.462214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.462310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.462329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.466217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.466362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.466382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.470141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.470249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.470269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.880 [2024-12-15 13:37:20.474068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.880 [2024-12-15 13:37:20.474179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.880 [2024-12-15 13:37:20.474199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.478197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.478374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.478394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.482147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.482322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.482342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.486069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.486238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.486258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.490011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.490192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.490211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.493799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.493894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.493928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.497995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.498138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.498157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.501932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.502067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.502087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.505705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.505793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.505814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.509582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.509776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.513518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.513799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.517397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.517617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.517638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.521278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.521388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 
[2024-12-15 13:37:20.521408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.525060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.525158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.525178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.529019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.529180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.529200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.533180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.533287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.533306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.537049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.537131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.537151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.541060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.541245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.541266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.544898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.545092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.545117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.548865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.549038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.549064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.552978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.553071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.553090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.556782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.556859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.556878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.560682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.560820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.560845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.881 [2024-12-15 13:37:20.564536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:14.881 [2024-12-15 13:37:20.564633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.881 [2024-12-15 13:37:20.564653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.568490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.568573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.568592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.572561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.572745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.572770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.576388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.576564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.576585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.580230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.580400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.580419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.584301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.584477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.584496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.588181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.588271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.588290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.592120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.592297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.592317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.596063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.596160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.596212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.600122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.600229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.600247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.604089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.141 [2024-12-15 13:37:20.604256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.141 [2024-12-15 13:37:20.604275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.141 [2024-12-15 13:37:20.607935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.608142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.608160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.611938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.612139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.612159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.615847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.615977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.615998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.619466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.619558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.619578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.623255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.623396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.623415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.627041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.627134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.627154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.630770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.630858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.630878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.634516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.634725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.634745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.638364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.638537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.638556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.642191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.642371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.642390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.645865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.646053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.646071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.649529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.649678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.649698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.653279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.653439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.653458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.657030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 
13:37:20.657138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.657157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.660734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.660831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.660850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.664448] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.664649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.664669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.668148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.668355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.668374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.671978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.672165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.672186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.675748] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.675858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.675878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.679451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.679550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.679569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.683202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 
00:23:15.142 [2024-12-15 13:37:20.683345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.683364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.686978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.687090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.687108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.690691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.690789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.142 [2024-12-15 13:37:20.690809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.142 [2024-12-15 13:37:20.694550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.142 [2024-12-15 13:37:20.694742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.694763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.698311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.698472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.698492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.702193] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.702377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.702396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.705959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.706091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.706110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.709676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with 
pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.709830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.709851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.713409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.713594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.713627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.717220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.717308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.717327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.721059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.721152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.721172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.724855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.725037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.725057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.728616] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.728806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.728826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.143 [2024-12-15 13:37:20.732413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x938280) with pdu=0x2000190fef90 00:23:15.143 [2024-12-15 13:37:20.732608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.143 [2024-12-15 13:37:20.732640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.143 00:23:15.143 Latency(us) 00:23:15.143 [2024-12-15T13:37:20.833Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max
00:23:15.143 [2024-12-15T13:37:20.833Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:15.143 nvme0n1 : 2.00 8063.33 1007.92 0.00 0.00 1979.85 1325.61 9175.04
00:23:15.143 [2024-12-15T13:37:20.833Z] ===================================================================================================================
00:23:15.143 [2024-12-15T13:37:20.833Z] Total : 8063.33 1007.92 0.00 0.00 1979.85 1325.61 9175.04
00:23:15.143 0
00:23:15.143 13:37:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:15.143 13:37:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:15.143 13:37:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:15.143 | .driver_specific
00:23:15.143 | .nvme_error
00:23:15.143 | .status_code
00:23:15.143 | .command_transient_transport_error'
00:23:15.143 13:37:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:15.401 13:37:21 -- host/digest.sh@71 -- # (( 520 > 0 ))
00:23:15.401 13:37:21 -- host/digest.sh@73 -- # killprocess 98007
00:23:15.401 13:37:21 -- common/autotest_common.sh@936 -- # '[' -z 98007 ']'
00:23:15.401 13:37:21 -- common/autotest_common.sh@940 -- # kill -0 98007
00:23:15.401 13:37:21 -- common/autotest_common.sh@941 -- # uname
00:23:15.401 13:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:15.401 13:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98007
00:23:15.401 13:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:15.401 13:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:15.401 13:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98007'
killing process with pid 98007
Received shutdown signal, test time was about 2.000000 seconds
00:23:15.401
00:23:15.401 Latency(us)
00:23:15.401 [2024-12-15T13:37:21.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:15.401 [2024-12-15T13:37:21.091Z] ===================================================================================================================
00:23:15.401 [2024-12-15T13:37:21.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:15.401 13:37:21 -- common/autotest_common.sh@955 -- # kill 98007
00:23:15.401 13:37:21 -- common/autotest_common.sh@960 -- # wait 98007
00:23:15.660 13:37:21 -- host/digest.sh@115 -- # killprocess 97692
00:23:15.660 13:37:21 -- common/autotest_common.sh@936 -- # '[' -z 97692 ']'
00:23:15.660 13:37:21 -- common/autotest_common.sh@940 -- # kill -0 97692
00:23:15.660 13:37:21 -- common/autotest_common.sh@941 -- # uname
00:23:15.660 13:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:15.660 13:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97692
00:23:15.660 13:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:15.660 killing process with pid 97692
13:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:15.660 13:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97692'
00:23:15.660 13:37:21 -- common/autotest_common.sh@955 -- # kill 97692
00:23:15.660 13:37:21 -- common/autotest_common.sh@960 -- # wait 97692
00:23:15.918
00:23:15.918 real 0m18.165s
00:23:15.918 user 0m34.509s
00:23:15.918 sys 0m4.736s
00:23:15.918 ************************************
00:23:15.918 END TEST nvmf_digest_error
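The pass condition checked above ("(( 520 > 0 ))") comes from the transient-transport-error counter that the bperf application exposes over its RPC socket. Reconstructed from the trace, the readback amounts to roughly the snippet below; the socket path, bdev name, and JSON field path are taken from this run, and it is an illustrative sketch rather than the digest.sh helper itself:

# sketch of get_transient_errcount: read the transient transport error count for nvme0n1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'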
13:37:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:15.918 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:23:15.918 ************************************ 00:23:15.918 13:37:21 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:15.918 13:37:21 -- host/digest.sh@139 -- # nvmftestfini 00:23:15.918 13:37:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:15.918 13:37:21 -- nvmf/common.sh@116 -- # sync 00:23:15.918 13:37:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:15.918 13:37:21 -- nvmf/common.sh@119 -- # set +e 00:23:15.918 13:37:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:15.918 13:37:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:15.918 rmmod nvme_tcp 00:23:15.918 rmmod nvme_fabrics 00:23:15.918 rmmod nvme_keyring 00:23:15.918 13:37:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:16.177 13:37:21 -- nvmf/common.sh@123 -- # set -e 00:23:16.177 13:37:21 -- nvmf/common.sh@124 -- # return 0 00:23:16.177 13:37:21 -- nvmf/common.sh@477 -- # '[' -n 97692 ']' 00:23:16.177 13:37:21 -- nvmf/common.sh@478 -- # killprocess 97692 00:23:16.177 13:37:21 -- common/autotest_common.sh@936 -- # '[' -z 97692 ']' 00:23:16.177 13:37:21 -- common/autotest_common.sh@940 -- # kill -0 97692 00:23:16.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97692) - No such process 00:23:16.177 Process with pid 97692 is not found 00:23:16.177 13:37:21 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97692 is not found' 00:23:16.177 13:37:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:16.177 13:37:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:16.177 13:37:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:16.177 13:37:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.177 13:37:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:16.177 13:37:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.177 13:37:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.177 13:37:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.177 13:37:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:16.177 00:23:16.177 real 0m36.244s 00:23:16.177 user 1m7.235s 00:23:16.177 sys 0m9.725s 00:23:16.177 ************************************ 00:23:16.177 END TEST nvmf_digest 00:23:16.177 ************************************ 00:23:16.177 13:37:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:16.177 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:23:16.177 13:37:21 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:16.177 13:37:21 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:16.177 13:37:21 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:16.177 13:37:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:16.177 13:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:16.177 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:23:16.177 ************************************ 00:23:16.177 START TEST nvmf_mdns_discovery 00:23:16.177 ************************************ 00:23:16.177 13:37:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:16.177 * Looking for test storage... 
00:23:16.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:16.177 13:37:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:16.177 13:37:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:16.177 13:37:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:16.437 13:37:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:16.437 13:37:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:16.437 13:37:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:16.437 13:37:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:16.437 13:37:21 -- scripts/common.sh@335 -- # IFS=.-: 00:23:16.437 13:37:21 -- scripts/common.sh@335 -- # read -ra ver1 00:23:16.437 13:37:21 -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.437 13:37:21 -- scripts/common.sh@336 -- # read -ra ver2 00:23:16.437 13:37:21 -- scripts/common.sh@337 -- # local 'op=<' 00:23:16.437 13:37:21 -- scripts/common.sh@339 -- # ver1_l=2 00:23:16.437 13:37:21 -- scripts/common.sh@340 -- # ver2_l=1 00:23:16.437 13:37:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:16.437 13:37:21 -- scripts/common.sh@343 -- # case "$op" in 00:23:16.437 13:37:21 -- scripts/common.sh@344 -- # : 1 00:23:16.437 13:37:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:16.437 13:37:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:16.437 13:37:21 -- scripts/common.sh@364 -- # decimal 1 00:23:16.437 13:37:21 -- scripts/common.sh@352 -- # local d=1 00:23:16.437 13:37:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.437 13:37:21 -- scripts/common.sh@354 -- # echo 1 00:23:16.437 13:37:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:16.437 13:37:21 -- scripts/common.sh@365 -- # decimal 2 00:23:16.437 13:37:21 -- scripts/common.sh@352 -- # local d=2 00:23:16.437 13:37:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.437 13:37:21 -- scripts/common.sh@354 -- # echo 2 00:23:16.437 13:37:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:16.437 13:37:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:16.437 13:37:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:16.437 13:37:21 -- scripts/common.sh@367 -- # return 0 00:23:16.437 13:37:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.437 13:37:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:16.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.437 --rc genhtml_branch_coverage=1 00:23:16.437 --rc genhtml_function_coverage=1 00:23:16.437 --rc genhtml_legend=1 00:23:16.437 --rc geninfo_all_blocks=1 00:23:16.437 --rc geninfo_unexecuted_blocks=1 00:23:16.437 00:23:16.437 ' 00:23:16.437 13:37:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:16.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.437 --rc genhtml_branch_coverage=1 00:23:16.437 --rc genhtml_function_coverage=1 00:23:16.437 --rc genhtml_legend=1 00:23:16.437 --rc geninfo_all_blocks=1 00:23:16.437 --rc geninfo_unexecuted_blocks=1 00:23:16.437 00:23:16.437 ' 00:23:16.437 13:37:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:16.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.437 --rc genhtml_branch_coverage=1 00:23:16.437 --rc genhtml_function_coverage=1 00:23:16.437 --rc genhtml_legend=1 00:23:16.437 --rc geninfo_all_blocks=1 00:23:16.437 --rc geninfo_unexecuted_blocks=1 00:23:16.437 00:23:16.437 ' 00:23:16.437 
13:37:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:16.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.437 --rc genhtml_branch_coverage=1 00:23:16.437 --rc genhtml_function_coverage=1 00:23:16.437 --rc genhtml_legend=1 00:23:16.437 --rc geninfo_all_blocks=1 00:23:16.437 --rc geninfo_unexecuted_blocks=1 00:23:16.437 00:23:16.437 ' 00:23:16.437 13:37:21 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:16.437 13:37:21 -- nvmf/common.sh@7 -- # uname -s 00:23:16.437 13:37:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.437 13:37:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.437 13:37:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.437 13:37:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.437 13:37:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.437 13:37:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.437 13:37:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.437 13:37:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.437 13:37:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.437 13:37:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.437 13:37:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:23:16.437 13:37:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:23:16.437 13:37:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.437 13:37:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.437 13:37:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:16.437 13:37:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:16.437 13:37:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.437 13:37:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.437 13:37:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.437 13:37:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.437 13:37:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.438 13:37:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.438 13:37:21 -- paths/export.sh@5 -- # export PATH 00:23:16.438 13:37:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.438 13:37:21 -- nvmf/common.sh@46 -- # : 0 00:23:16.438 13:37:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:16.438 13:37:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:16.438 13:37:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:16.438 13:37:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.438 13:37:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.438 13:37:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:16.438 13:37:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:16.438 13:37:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:16.438 13:37:21 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:16.438 13:37:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:16.438 13:37:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.438 13:37:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:16.438 13:37:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:16.438 13:37:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:16.438 13:37:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.438 13:37:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.438 13:37:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.438 13:37:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:16.438 13:37:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:16.438 13:37:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:16.438 13:37:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:16.438 13:37:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:16.438 13:37:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:16.438 13:37:21 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:16.438 13:37:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.438 13:37:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:16.438 13:37:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:16.438 13:37:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:16.438 13:37:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:16.438 13:37:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:16.438 13:37:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.438 13:37:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:16.438 13:37:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:16.438 13:37:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:16.438 13:37:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:16.438 13:37:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:16.438 13:37:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:16.438 Cannot find device "nvmf_tgt_br" 00:23:16.438 13:37:21 -- nvmf/common.sh@154 -- # true 00:23:16.438 13:37:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:16.438 Cannot find device "nvmf_tgt_br2" 00:23:16.438 13:37:21 -- nvmf/common.sh@155 -- # true 00:23:16.438 13:37:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:16.438 13:37:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:16.438 Cannot find device "nvmf_tgt_br" 00:23:16.438 13:37:21 -- nvmf/common.sh@157 -- # true 00:23:16.438 13:37:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:16.438 Cannot find device "nvmf_tgt_br2" 00:23:16.438 13:37:21 -- nvmf/common.sh@158 -- # true 00:23:16.438 13:37:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:16.438 13:37:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:16.438 13:37:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:16.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.438 13:37:22 -- nvmf/common.sh@161 -- # true 00:23:16.438 13:37:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:16.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:16.438 13:37:22 -- nvmf/common.sh@162 -- # true 00:23:16.438 13:37:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:16.438 13:37:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:16.438 13:37:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:16.438 13:37:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:16.438 13:37:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:16.438 13:37:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:16.698 13:37:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:16.698 13:37:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:16.698 13:37:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:16.698 13:37:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:16.698 13:37:22 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:23:16.698 13:37:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:16.698 13:37:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:16.698 13:37:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:16.698 13:37:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:16.698 13:37:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:16.698 13:37:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:16.698 13:37:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:16.698 13:37:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:16.698 13:37:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:16.698 13:37:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:16.698 13:37:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:16.698 13:37:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:16.698 13:37:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:16.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:16.698 00:23:16.698 --- 10.0.0.2 ping statistics --- 00:23:16.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.698 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:16.698 13:37:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:16.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:16.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:23:16.698 00:23:16.698 --- 10.0.0.3 ping statistics --- 00:23:16.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.698 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:16.698 13:37:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:16.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:16.698 00:23:16.698 --- 10.0.0.1 ping statistics --- 00:23:16.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.698 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:16.698 13:37:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.698 13:37:22 -- nvmf/common.sh@421 -- # return 0 00:23:16.698 13:37:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:16.698 13:37:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.698 13:37:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:16.698 13:37:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:16.698 13:37:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.698 13:37:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:16.698 13:37:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:16.698 13:37:22 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:16.698 13:37:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:16.698 13:37:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.698 13:37:22 -- common/autotest_common.sh@10 -- # set +x 00:23:16.698 13:37:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:16.698 13:37:22 -- nvmf/common.sh@469 -- # nvmfpid=98306 00:23:16.698 13:37:22 -- nvmf/common.sh@470 -- # waitforlisten 98306 00:23:16.698 13:37:22 -- common/autotest_common.sh@829 -- # '[' -z 98306 ']' 00:23:16.698 13:37:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.698 13:37:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.698 13:37:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.698 13:37:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.698 13:37:22 -- common/autotest_common.sh@10 -- # set +x 00:23:16.698 [2024-12-15 13:37:22.347925] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:16.698 [2024-12-15 13:37:22.348054] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.957 [2024-12-15 13:37:22.482094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.957 [2024-12-15 13:37:22.546750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:16.957 [2024-12-15 13:37:22.546907] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.957 [2024-12-15 13:37:22.546918] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.957 [2024-12-15 13:37:22.546925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
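For reference, the network the target is started in can be reconstructed from the nvmf_veth_init trace above: an initiator-side veth at 10.0.0.1 and a namespace (nvmf_tgt_ns_spdk) holding the two target interfaces at 10.0.0.2 and 10.0.0.3, all joined by the nvmf_br bridge. The sketch below restates those steps in standalone form; names, addresses, and the port-4420 rule are taken from this run, and it is illustrative rather than the common.sh implementation:

# sketch of the veth/namespace topology built above (run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br; ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT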
00:23:16.957 [2024-12-15 13:37:22.546964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.893 13:37:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.893 13:37:23 -- common/autotest_common.sh@862 -- # return 0 00:23:17.893 13:37:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:17.893 13:37:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 13:37:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.893 13:37:23 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:17.893 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.893 13:37:23 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:17.893 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.893 13:37:23 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:17.893 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 [2024-12-15 13:37:23.459966] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.893 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.893 13:37:23 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:17.893 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 [2024-12-15 13:37:23.468103] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:17.893 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.893 13:37:23 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:17.893 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.893 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.893 null0 00:23:17.894 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:17.894 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.894 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.894 null1 00:23:17.894 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:17.894 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.894 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.894 null2 00:23:17.894 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:17.894 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.894 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.894 null3 00:23:17.894 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
00:23:17.894 13:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.894 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.894 13:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@47 -- # hostpid=98356 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:17.894 13:37:23 -- host/mdns_discovery.sh@48 -- # waitforlisten 98356 /tmp/host.sock 00:23:17.894 13:37:23 -- common/autotest_common.sh@829 -- # '[' -z 98356 ']' 00:23:17.894 13:37:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:17.894 13:37:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.894 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:17.894 13:37:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:17.894 13:37:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.894 13:37:23 -- common/autotest_common.sh@10 -- # set +x 00:23:17.894 [2024-12-15 13:37:23.571327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:17.894 [2024-12-15 13:37:23.571445] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98356 ] 00:23:18.153 [2024-12-15 13:37:23.711072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.153 [2024-12-15 13:37:23.787261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:18.153 [2024-12-15 13:37:23.787467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.090 13:37:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.090 13:37:24 -- common/autotest_common.sh@862 -- # return 0 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@57 -- # avahipid=98386 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:19.090 13:37:24 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:19.090 Process 1065 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:19.090 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:19.090 Successfully dropped root privileges. 00:23:19.090 avahi-daemon 0.8 starting up. 00:23:19.090 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:19.090 Successfully called chroot(). 00:23:19.090 Successfully dropped remaining capabilities. 00:23:19.090 No service file found in /etc/avahi/services. 00:23:20.024 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:20.024 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
00:23:20.024 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:20.024 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:20.024 Network interface enumeration completed. 00:23:20.024 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:20.024 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:20.024 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:20.024 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:20.024 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2079025344. 00:23:20.024 13:37:25 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:20.024 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.025 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.025 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:20.025 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.025 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.025 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.025 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.025 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.025 13:37:25 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.283 
13:37:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.283 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.283 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # sort 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # xargs 00:23:20.283 13:37:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:20.283 13:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:20.542 13:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@64 -- # sort 00:23:20.542 13:37:25 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 13:37:25 -- host/mdns_discovery.sh@64 -- # xargs 00:23:20.542 [2024-12-15 13:37:26.003477] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 [2024-12-15 13:37:26.053642] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 [2024-12-15 13:37:26.093616] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:20.542 13:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.542 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:23:20.542 [2024-12-15 13:37:26.101586] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:20.542 13:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98443 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:20.542 13:37:26 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:21.477 [2024-12-15 13:37:26.903482] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:21.477 Established under name 'CDC' 00:23:21.736 [2024-12-15 13:37:27.303492] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:21.736 [2024-12-15 13:37:27.303528] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:21.736 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:21.736 cookie is 0 00:23:21.736 is_local: 1 00:23:21.736 our_own: 0 00:23:21.736 wide_area: 0 00:23:21.736 multicast: 1 00:23:21.736 cached: 1 00:23:21.736 [2024-12-15 13:37:27.403483] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:21.736 [2024-12-15 13:37:27.403517] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:21.736 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:21.736 cookie is 0 00:23:21.736 is_local: 1 00:23:21.736 our_own: 0 00:23:21.736 wide_area: 0 00:23:21.736 multicast: 1 00:23:21.736 cached: 1 00:23:22.673 [2024-12-15 13:37:28.314046] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:22.673 [2024-12-15 13:37:28.314091] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:22.673 [2024-12-15 13:37:28.314108] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:22.931 [2024-12-15 13:37:28.400153] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:22.931 [2024-12-15 13:37:28.413817] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:22.931 [2024-12-15 13:37:28.413849] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:22.931 [2024-12-15 13:37:28.413888] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:22.931 [2024-12-15 13:37:28.464356] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:22.931 [2024-12-15 13:37:28.464399] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:22.931 [2024-12-15 13:37:28.500574] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:22.931 [2024-12-15 13:37:28.555183] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:22.932 [2024-12-15 13:37:28.555211] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:25.469 13:37:31 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:25.469 13:37:31 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:25.469 13:37:31 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:25.469 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.469 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.469 13:37:31 -- host/mdns_discovery.sh@80 -- # sort 00:23:25.469 13:37:31 -- host/mdns_discovery.sh@80 -- # xargs 00:23:25.469 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:25.728 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.728 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@76 -- # sort 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@76 -- # xargs 00:23:25.728 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.728 13:37:31 -- 
host/mdns_discovery.sh@68 -- # sort 00:23:25.728 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@68 -- # xargs 00:23:25.728 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.728 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:25.728 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@64 -- # sort 00:23:25.728 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@64 -- # xargs 00:23:25.728 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:25.728 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.728 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # xargs 00:23:25.728 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:25.728 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.728 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:25.728 13:37:31 -- host/mdns_discovery.sh@72 -- # xargs 00:23:25.728 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:25.987 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:25.987 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.987 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:25.987 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.987 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.987 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:25.987 13:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.987 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:23:25.987 13:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.987 13:37:31 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.923 13:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.923 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.923 13:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:26.923 13:37:32 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.923 13:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.923 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:23:26.923 13:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:27.182 13:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.182 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:23:27.182 [2024-12-15 13:37:32.632334] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:27.182 [2024-12-15 13:37:32.632765] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:27.182 [2024-12-15 13:37:32.632802] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.182 [2024-12-15 13:37:32.632837] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:27.182 [2024-12-15 13:37:32.632851] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:27.182 13:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:27.182 13:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.182 13:37:32 -- common/autotest_common.sh@10 -- # set +x 00:23:27.182 [2024-12-15 13:37:32.640291] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:27.182 [2024-12-15 13:37:32.640771] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:27.182 [2024-12-15 13:37:32.640827] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:27.182 13:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.182 13:37:32 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:27.182 [2024-12-15 13:37:32.771914] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:27.182 [2024-12-15 13:37:32.772097] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:27.182 [2024-12-15 13:37:32.829142] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:27.182 [2024-12-15 13:37:32.829184] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.182 [2024-12-15 13:37:32.829191] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:27.182 [2024-12-15 13:37:32.829205] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.182 [2024-12-15 13:37:32.829267] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:23:27.182 [2024-12-15 13:37:32.829275] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:27.182 [2024-12-15 13:37:32.829280] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:27.182 [2024-12-15 13:37:32.829291] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:27.441 [2024-12-15 13:37:32.875072] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.441 [2024-12-15 13:37:32.875101] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:27.441 [2024-12-15 13:37:32.875152] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:27.441 [2024-12-15 13:37:32.875159] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:28.009 13:37:33 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:28.009 13:37:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.009 13:37:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:28.009 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.009 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.009 13:37:33 -- host/mdns_discovery.sh@68 -- # sort 00:23:28.009 13:37:33 -- host/mdns_discovery.sh@68 -- # xargs 00:23:28.009 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@64 -- # sort 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@64 -- # xargs 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:28.268 13:37:33 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 [2024-12-15 13:37:33.937111] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:28.268 [2024-12-15 13:37:33.937170] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.268 [2024-12-15 13:37:33.937216] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:28.268 [2024-12-15 13:37:33.937226] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:28.268 13:37:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.268 13:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.268 [2024-12-15 13:37:33.944738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.944779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.944792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.944802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.944812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.944821] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.944831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.944840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.944848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.268 [2024-12-15 13:37:33.945160] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:28.268 [2024-12-15 13:37:33.945199] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:28.268 13:37:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.268 13:37:33 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:28.268 [2024-12-15 13:37:33.952887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.952916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.952943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.952952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.952961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.952969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.952978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.268 [2024-12-15 13:37:33.953000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.268 [2024-12-15 13:37:33.953008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.268 [2024-12-15 13:37:33.954720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.529 [2024-12-15 13:37:33.962854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.529 [2024-12-15 13:37:33.964720] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.529 [2024-12-15 13:37:33.964837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.964881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.964912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.529 [2024-12-15 13:37:33.964937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be 
set 00:23:28.529 [2024-12-15 13:37:33.964968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.529 [2024-12-15 13:37:33.964981] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.529 [2024-12-15 13:37:33.964989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.529 [2024-12-15 13:37:33.964998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.529 [2024-12-15 13:37:33.965012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.529 [2024-12-15 13:37:33.972863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.529 [2024-12-15 13:37:33.972971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.973011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.973026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.529 [2024-12-15 13:37:33.973034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.529 [2024-12-15 13:37:33.973048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.529 [2024-12-15 13:37:33.973060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.529 [2024-12-15 13:37:33.973067] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.529 [2024-12-15 13:37:33.973075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.529 [2024-12-15 13:37:33.973087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.529 [2024-12-15 13:37:33.974800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.529 [2024-12-15 13:37:33.974887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.974927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.529 [2024-12-15 13:37:33.974942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.529 [2024-12-15 13:37:33.974951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.529 [2024-12-15 13:37:33.974980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.529 [2024-12-15 13:37:33.975008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.529 [2024-12-15 13:37:33.975016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.529 [2024-12-15 13:37:33.975024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:28.529 [2024-12-15 13:37:33.975037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.530 [2024-12-15 13:37:33.982942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.530 [2024-12-15 13:37:33.983060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.983100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.983114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.530 [2024-12-15 13:37:33.983123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:33.983136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:33.983163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:33.983171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:33.983194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.530 [2024-12-15 13:37:33.983223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.530 [2024-12-15 13:37:33.984862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.530 [2024-12-15 13:37:33.984947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.984986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.985015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.530 [2024-12-15 13:37:33.985040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:33.985054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:33.985066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:33.985074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:33.985082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.530 [2024-12-15 13:37:33.985094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.530 [2024-12-15 13:37:33.993034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.530 [2024-12-15 13:37:33.993145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.993184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.993199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.530 [2024-12-15 13:37:33.993208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:33.993221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:33.993247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:33.993255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:33.993263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.530 [2024-12-15 13:37:33.993291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.530 [2024-12-15 13:37:33.994922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.530 [2024-12-15 13:37:33.995008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.995048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:33.995076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.530 [2024-12-15 13:37:33.995101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:33.995130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:33.995142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:33.995150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:33.995158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.530 [2024-12-15 13:37:33.995171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.530 [2024-12-15 13:37:34.003115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.530 [2024-12-15 13:37:34.003219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.003259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.003274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.530 [2024-12-15 13:37:34.003282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:34.003296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:34.003323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:34.003331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:34.003339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.530 [2024-12-15 13:37:34.003351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.530 [2024-12-15 13:37:34.004980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.530 [2024-12-15 13:37:34.005096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.005136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.005150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.530 [2024-12-15 13:37:34.005159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:34.005172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:34.005184] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:34.005191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:34.005199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.530 [2024-12-15 13:37:34.005211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.530 [2024-12-15 13:37:34.013176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.530 [2024-12-15 13:37:34.013282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.013321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.013336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.530 [2024-12-15 13:37:34.013344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:34.013357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:34.013385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:34.013393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:34.013401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.530 [2024-12-15 13:37:34.013413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.530 [2024-12-15 13:37:34.015060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.530 [2024-12-15 13:37:34.015157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.015196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.530 [2024-12-15 13:37:34.015210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.530 [2024-12-15 13:37:34.015218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.530 [2024-12-15 13:37:34.015231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.530 [2024-12-15 13:37:34.015243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.530 [2024-12-15 13:37:34.015250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.530 [2024-12-15 13:37:34.015258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.530 [2024-12-15 13:37:34.015270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.530 [2024-12-15 13:37:34.023255] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.530 [2024-12-15 13:37:34.023360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.023399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.023413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.531 [2024-12-15 13:37:34.023422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.023435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.023460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.023469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.023476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.531 [2024-12-15 13:37:34.023488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.531 [2024-12-15 13:37:34.025115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.531 [2024-12-15 13:37:34.025200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.025239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.025269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.531 [2024-12-15 13:37:34.025279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.025292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.025305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.025312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.025320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.531 [2024-12-15 13:37:34.025332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.531 [2024-12-15 13:37:34.033317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.531 [2024-12-15 13:37:34.033409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.033449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.033464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.531 [2024-12-15 13:37:34.033473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.033486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.033536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.033548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.033556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.531 [2024-12-15 13:37:34.033595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.531 [2024-12-15 13:37:34.035174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.531 [2024-12-15 13:37:34.035262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.035303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.035318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.531 [2024-12-15 13:37:34.035327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.035356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.035369] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.035376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.035384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.531 [2024-12-15 13:37:34.035397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.531 [2024-12-15 13:37:34.043382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.531 [2024-12-15 13:37:34.043492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.043533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.043547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.531 [2024-12-15 13:37:34.043556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.043585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.043658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.043670] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.043679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.531 [2024-12-15 13:37:34.043693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.531 [2024-12-15 13:37:34.045234] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.531 [2024-12-15 13:37:34.045335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.045376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.045391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.531 [2024-12-15 13:37:34.045400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.045429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.045458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.045467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.045475] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.531 [2024-12-15 13:37:34.045488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.531 [2024-12-15 13:37:34.053462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.531 [2024-12-15 13:37:34.053591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.053650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.053667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.531 [2024-12-15 13:37:34.053676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.053692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.053723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.053733] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.053742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.531 [2024-12-15 13:37:34.053756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.531 [2024-12-15 13:37:34.055309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.531 [2024-12-15 13:37:34.055409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.055448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.055463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.531 [2024-12-15 13:37:34.055471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.531 [2024-12-15 13:37:34.055485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.531 [2024-12-15 13:37:34.055497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.531 [2024-12-15 13:37:34.055504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.531 [2024-12-15 13:37:34.055512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.531 [2024-12-15 13:37:34.055540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.531 [2024-12-15 13:37:34.063525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.531 [2024-12-15 13:37:34.063658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.531 [2024-12-15 13:37:34.063700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.063715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.532 [2024-12-15 13:37:34.063724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.532 [2024-12-15 13:37:34.063738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.532 [2024-12-15 13:37:34.063782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.532 [2024-12-15 13:37:34.063807] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.532 [2024-12-15 13:37:34.063815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.532 [2024-12-15 13:37:34.063837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.532 [2024-12-15 13:37:34.065367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.532 [2024-12-15 13:37:34.065464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.065504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.065518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.532 [2024-12-15 13:37:34.065527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.532 [2024-12-15 13:37:34.065540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.532 [2024-12-15 13:37:34.065551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.532 [2024-12-15 13:37:34.065568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.532 [2024-12-15 13:37:34.065593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.532 [2024-12-15 13:37:34.065633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.532 [2024-12-15 13:37:34.073612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:28.532 [2024-12-15 13:37:34.073704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.073746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.073793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241c760 with addr=10.0.0.3, port=4420 00:23:28.532 [2024-12-15 13:37:34.073803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241c760 is same with the state(5) to be set 00:23:28.532 [2024-12-15 13:37:34.073818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241c760 (9): Bad file descriptor 00:23:28.532 [2024-12-15 13:37:34.073847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:28.532 [2024-12-15 13:37:34.073856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:28.532 [2024-12-15 13:37:34.073865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:28.532 [2024-12-15 13:37:34.073879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.532 [2024-12-15 13:37:34.075425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.532 [2024-12-15 13:37:34.075508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.075547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.532 [2024-12-15 13:37:34.075561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2431aa0 with addr=10.0.0.2, port=4420 00:23:28.532 [2024-12-15 13:37:34.075569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2431aa0 is same with the state(5) to be set 00:23:28.532 [2024-12-15 13:37:34.075614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431aa0 (9): Bad file descriptor 00:23:28.532 [2024-12-15 13:37:34.075627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.532 [2024-12-15 13:37:34.075647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.532 [2024-12-15 13:37:34.075655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.532 [2024-12-15 13:37:34.075668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.532 [2024-12-15 13:37:34.075839] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:28.532 [2024-12-15 13:37:34.075857] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:28.532 [2024-12-15 13:37:34.075873] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:28.532 [2024-12-15 13:37:34.075903] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:28.532 [2024-12-15 13:37:34.075917] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:28.532 [2024-12-15 13:37:34.075928] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:28.532 [2024-12-15 13:37:34.161917] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:28.532 [2024-12-15 13:37:34.161972] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:29.468 13:37:34 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:29.468 13:37:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:29.468 13:37:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:29.468 13:37:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.469 13:37:34 -- common/autotest_common.sh@10 -- # set +x 00:23:29.469 13:37:34 -- host/mdns_discovery.sh@68 -- # sort 00:23:29.469 13:37:34 -- host/mdns_discovery.sh@68 -- # xargs 00:23:29.469 13:37:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:29.469 13:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@64 -- # sort 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@64 -- # xargs 00:23:29.469 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:29.469 13:37:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.469 13:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.469 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # xargs 00:23:29.469 13:37:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:29.469 13:37:35 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:29.469 13:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.469 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:29.469 13:37:35 -- host/mdns_discovery.sh@72 -- # xargs 00:23:29.469 13:37:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:29.727 13:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.727 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:29.727 13:37:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:29.727 13:37:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.727 13:37:35 -- common/autotest_common.sh@10 -- # set +x 00:23:29.727 13:37:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.727 13:37:35 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:29.727 [2024-12-15 13:37:35.303491] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:30.662 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.662 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@80 -- # sort 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@80 -- # xargs 00:23:30.662 13:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@68 -- # sort 00:23:30.662 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.662 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.662 13:37:36 -- host/mdns_discovery.sh@68 -- # xargs 00:23:30.662 13:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:30.921 
13:37:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.921 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.921 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:30.921 13:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:30.921 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.921 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.921 13:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:30.921 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.921 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.921 13:37:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:30.921 13:37:36 -- common/autotest_common.sh@650 -- # local es=0 00:23:30.921 13:37:36 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:30.921 13:37:36 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:30.921 13:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.921 13:37:36 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:30.921 13:37:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:30.921 13:37:36 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:30.921 13:37:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.921 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:30.921 [2024-12-15 13:37:36.473184] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:30.921 2024/12/15 13:37:36 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:30.921 request: 00:23:30.921 { 00:23:30.921 "method": "bdev_nvme_start_mdns_discovery", 00:23:30.921 "params": { 00:23:30.921 "name": "mdns", 00:23:30.921 "svcname": "_nvme-disc._http", 00:23:30.921 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:30.921 } 00:23:30.921 } 00:23:30.921 Got JSON-RPC error response 00:23:30.921 GoRPCClient: error on JSON-RPC call 00:23:30.921 13:37:36 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:30.921 13:37:36 -- 
common/autotest_common.sh@653 -- # es=1 00:23:30.921 13:37:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:30.921 13:37:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:30.921 13:37:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:30.921 13:37:36 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:31.179 [2024-12-15 13:37:36.861722] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:31.438 [2024-12-15 13:37:36.961719] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:31.438 [2024-12-15 13:37:37.061725] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:31.438 [2024-12-15 13:37:37.061743] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:31.438 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:31.438 cookie is 0 00:23:31.438 is_local: 1 00:23:31.438 our_own: 0 00:23:31.438 wide_area: 0 00:23:31.438 multicast: 1 00:23:31.438 cached: 1 00:23:31.697 [2024-12-15 13:37:37.161728] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:31.697 [2024-12-15 13:37:37.161751] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:31.697 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:31.697 cookie is 0 00:23:31.697 is_local: 1 00:23:31.697 our_own: 0 00:23:31.697 wide_area: 0 00:23:31.697 multicast: 1 00:23:31.697 cached: 1 00:23:32.633 [2024-12-15 13:37:38.065766] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:32.633 [2024-12-15 13:37:38.065789] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:32.633 [2024-12-15 13:37:38.065805] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:32.633 [2024-12-15 13:37:38.151873] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:32.633 [2024-12-15 13:37:38.165535] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:32.633 [2024-12-15 13:37:38.165554] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:32.633 [2024-12-15 13:37:38.165634] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.633 [2024-12-15 13:37:38.212581] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:32.633 [2024-12-15 13:37:38.212639] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:32.633 [2024-12-15 13:37:38.251739] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:32.633 [2024-12-15 13:37:38.310446] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:32.633 [2024-12-15 13:37:38.310471] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:35.919 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.919 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@80 -- # sort 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@80 -- # xargs 00:23:35.919 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:35.919 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@76 -- # sort 00:23:35.919 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@76 -- # xargs 00:23:35.919 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:35.919 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.919 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@64 -- # sort 00:23:35.919 13:37:41 -- host/mdns_discovery.sh@64 -- # xargs 00:23:36.192 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:36.192 13:37:41 -- common/autotest_common.sh@650 -- # local es=0 00:23:36.192 13:37:41 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:36.192 13:37:41 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:36.192 13:37:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.192 13:37:41 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:36.192 13:37:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:36.192 13:37:41 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:36.192 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.192 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.192 [2024-12-15 13:37:41.659727] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:36.192 2024/12/15 13:37:41 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:36.192 request: 00:23:36.192 { 00:23:36.192 "method": "bdev_nvme_start_mdns_discovery", 00:23:36.192 "params": { 00:23:36.192 "name": "cdc", 00:23:36.192 "svcname": "_nvme-disc._tcp", 00:23:36.192 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:36.192 } 00:23:36.192 } 00:23:36.192 Got JSON-RPC error response 00:23:36.192 GoRPCClient: error on JSON-RPC call 00:23:36.192 13:37:41 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:36.192 13:37:41 -- common/autotest_common.sh@653 -- # es=1 00:23:36.192 13:37:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:36.192 13:37:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:36.192 13:37:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@76 -- # sort 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@76 -- # xargs 00:23:36.192 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.192 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.192 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.192 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.192 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@64 -- # xargs 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@64 -- # sort 00:23:36.192 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:36.192 13:37:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.192 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:36.192 13:37:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@197 -- # kill 98356 00:23:36.192 13:37:41 -- host/mdns_discovery.sh@200 -- # wait 98356 00:23:36.461 [2024-12-15 13:37:41.888735] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:36.461 13:37:41 -- host/mdns_discovery.sh@201 -- # kill 98443 00:23:36.461 Got SIGTERM, quitting. 00:23:36.461 13:37:41 -- host/mdns_discovery.sh@202 -- # kill 98386 00:23:36.461 13:37:41 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:36.461 13:37:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:36.461 13:37:41 -- nvmf/common.sh@116 -- # sync 00:23:36.461 Got SIGTERM, quitting. 
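The mdns_discovery case above boils down to a small set of bdev_nvme RPCs: start a browser for the advertised service type, inspect what it attached, confirm that a second registration with the same name or the same service is rejected with JSON-RPC -17 (File exists), and stop the browser again. A condensed sketch of that sequence, using the same host RPC socket as this test and rpc.py given repo-relative (an abbreviation, not the literal path from the trace), is:

# Start mDNS discovery for NVMe-oF services advertised as _nvme-disc._tcp
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Inspect the browser and the discovery controllers/bdevs it produced
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs

# Re-registering with the same name (or the same service) fails with
# Code=-17 Msg='File exists', exactly as in the error responses above
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true

# Tear the browser down before shutting the host down
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns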
00:23:36.461 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:36.461 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:36.461 avahi-daemon 0.8 exiting. 00:23:36.461 13:37:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:36.461 13:37:42 -- nvmf/common.sh@119 -- # set +e 00:23:36.461 13:37:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:36.461 13:37:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:36.461 rmmod nvme_tcp 00:23:36.461 rmmod nvme_fabrics 00:23:36.461 rmmod nvme_keyring 00:23:36.461 13:37:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:36.461 13:37:42 -- nvmf/common.sh@123 -- # set -e 00:23:36.461 13:37:42 -- nvmf/common.sh@124 -- # return 0 00:23:36.461 13:37:42 -- nvmf/common.sh@477 -- # '[' -n 98306 ']' 00:23:36.461 13:37:42 -- nvmf/common.sh@478 -- # killprocess 98306 00:23:36.461 13:37:42 -- common/autotest_common.sh@936 -- # '[' -z 98306 ']' 00:23:36.461 13:37:42 -- common/autotest_common.sh@940 -- # kill -0 98306 00:23:36.461 13:37:42 -- common/autotest_common.sh@941 -- # uname 00:23:36.461 13:37:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.461 13:37:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98306 00:23:36.461 13:37:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:36.461 killing process with pid 98306 00:23:36.461 13:37:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:36.461 13:37:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98306' 00:23:36.461 13:37:42 -- common/autotest_common.sh@955 -- # kill 98306 00:23:36.461 13:37:42 -- common/autotest_common.sh@960 -- # wait 98306 00:23:36.720 13:37:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:36.720 13:37:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:36.720 13:37:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:36.720 13:37:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.720 13:37:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:36.720 13:37:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.720 13:37:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.720 13:37:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.720 13:37:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:36.720 00:23:36.720 real 0m20.677s 00:23:36.720 user 0m40.335s 00:23:36.720 sys 0m2.044s 00:23:36.720 13:37:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:36.720 ************************************ 00:23:36.720 END TEST nvmf_mdns_discovery 00:23:36.720 ************************************ 00:23:36.720 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.979 13:37:42 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:36.979 13:37:42 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:36.979 13:37:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:36.980 13:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:36.980 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:36.980 ************************************ 00:23:36.980 START TEST nvmf_multipath 00:23:36.980 ************************************ 00:23:36.980 13:37:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:36.980 * Looking for 
test storage... 00:23:36.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:36.980 13:37:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:36.980 13:37:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:36.980 13:37:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:36.980 13:37:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:36.980 13:37:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:36.980 13:37:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:36.980 13:37:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:36.980 13:37:42 -- scripts/common.sh@335 -- # IFS=.-: 00:23:36.980 13:37:42 -- scripts/common.sh@335 -- # read -ra ver1 00:23:36.980 13:37:42 -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.980 13:37:42 -- scripts/common.sh@336 -- # read -ra ver2 00:23:36.980 13:37:42 -- scripts/common.sh@337 -- # local 'op=<' 00:23:36.980 13:37:42 -- scripts/common.sh@339 -- # ver1_l=2 00:23:36.980 13:37:42 -- scripts/common.sh@340 -- # ver2_l=1 00:23:36.980 13:37:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:36.980 13:37:42 -- scripts/common.sh@343 -- # case "$op" in 00:23:36.980 13:37:42 -- scripts/common.sh@344 -- # : 1 00:23:36.980 13:37:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:36.980 13:37:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:36.980 13:37:42 -- scripts/common.sh@364 -- # decimal 1 00:23:36.980 13:37:42 -- scripts/common.sh@352 -- # local d=1 00:23:36.980 13:37:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.980 13:37:42 -- scripts/common.sh@354 -- # echo 1 00:23:36.980 13:37:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:36.980 13:37:42 -- scripts/common.sh@365 -- # decimal 2 00:23:36.980 13:37:42 -- scripts/common.sh@352 -- # local d=2 00:23:36.980 13:37:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.980 13:37:42 -- scripts/common.sh@354 -- # echo 2 00:23:36.980 13:37:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:36.980 13:37:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:36.980 13:37:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:36.980 13:37:42 -- scripts/common.sh@367 -- # return 0 00:23:36.980 13:37:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.980 13:37:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:36.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.980 --rc genhtml_branch_coverage=1 00:23:36.980 --rc genhtml_function_coverage=1 00:23:36.980 --rc genhtml_legend=1 00:23:36.980 --rc geninfo_all_blocks=1 00:23:36.980 --rc geninfo_unexecuted_blocks=1 00:23:36.980 00:23:36.980 ' 00:23:36.980 13:37:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:36.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.980 --rc genhtml_branch_coverage=1 00:23:36.980 --rc genhtml_function_coverage=1 00:23:36.980 --rc genhtml_legend=1 00:23:36.980 --rc geninfo_all_blocks=1 00:23:36.980 --rc geninfo_unexecuted_blocks=1 00:23:36.980 00:23:36.980 ' 00:23:36.980 13:37:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:36.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.980 --rc genhtml_branch_coverage=1 00:23:36.980 --rc genhtml_function_coverage=1 00:23:36.980 --rc genhtml_legend=1 00:23:36.980 --rc geninfo_all_blocks=1 00:23:36.980 --rc geninfo_unexecuted_blocks=1 00:23:36.980 00:23:36.980 ' 
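The cmp_versions walk-through above is a field-by-field numeric comparison of two dotted version strings (lcov 1.15 against 2), splitting on '.', '-' and ':'. A minimal stand-alone sketch of the same idea follows; the helper name version_lt is hypothetical and purely numeric fields are assumed, so it is not the exact helper from scripts/common.sh:

# Succeeds when dotted version $1 sorts strictly before $2 (numeric fields only)
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2.x"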
00:23:36.980 13:37:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:36.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.980 --rc genhtml_branch_coverage=1 00:23:36.980 --rc genhtml_function_coverage=1 00:23:36.980 --rc genhtml_legend=1 00:23:36.980 --rc geninfo_all_blocks=1 00:23:36.980 --rc geninfo_unexecuted_blocks=1 00:23:36.980 00:23:36.980 ' 00:23:36.980 13:37:42 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.980 13:37:42 -- nvmf/common.sh@7 -- # uname -s 00:23:36.980 13:37:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.980 13:37:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.980 13:37:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.980 13:37:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.980 13:37:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.980 13:37:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.980 13:37:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.980 13:37:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.980 13:37:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.980 13:37:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.980 13:37:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:23:36.980 13:37:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:23:36.980 13:37:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.980 13:37:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.980 13:37:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.980 13:37:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.980 13:37:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.980 13:37:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.980 13:37:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.980 13:37:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.980 13:37:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.980 13:37:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.980 13:37:42 -- paths/export.sh@5 -- # export PATH 00:23:36.980 13:37:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.980 13:37:42 -- nvmf/common.sh@46 -- # : 0 00:23:36.980 13:37:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:36.980 13:37:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:36.980 13:37:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:36.980 13:37:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.980 13:37:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.980 13:37:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:36.980 13:37:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:36.980 13:37:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:36.980 13:37:42 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.980 13:37:42 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.980 13:37:42 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.980 13:37:42 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:36.980 13:37:42 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.980 13:37:42 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:36.980 13:37:42 -- host/multipath.sh@30 -- # nvmftestinit 00:23:36.980 13:37:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:36.980 13:37:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.980 13:37:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:36.980 13:37:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:36.980 13:37:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:36.980 13:37:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.980 13:37:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.980 13:37:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.980 13:37:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:36.980 13:37:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:36.981 13:37:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:36.981 13:37:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:36.981 13:37:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:36.981 13:37:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:36.981 13:37:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.981 13:37:42 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.981 13:37:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:36.981 13:37:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:36.981 13:37:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.981 13:37:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.981 13:37:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.981 13:37:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.981 13:37:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.981 13:37:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.981 13:37:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.981 13:37:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.981 13:37:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:36.981 13:37:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:37.239 Cannot find device "nvmf_tgt_br" 00:23:37.239 13:37:42 -- nvmf/common.sh@154 -- # true 00:23:37.239 13:37:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.239 Cannot find device "nvmf_tgt_br2" 00:23:37.239 13:37:42 -- nvmf/common.sh@155 -- # true 00:23:37.239 13:37:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:37.239 13:37:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:37.239 Cannot find device "nvmf_tgt_br" 00:23:37.239 13:37:42 -- nvmf/common.sh@157 -- # true 00:23:37.239 13:37:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:37.239 Cannot find device "nvmf_tgt_br2" 00:23:37.239 13:37:42 -- nvmf/common.sh@158 -- # true 00:23:37.240 13:37:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:37.240 13:37:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:37.240 13:37:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.240 13:37:42 -- nvmf/common.sh@161 -- # true 00:23:37.240 13:37:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.240 13:37:42 -- nvmf/common.sh@162 -- # true 00:23:37.240 13:37:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.240 13:37:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.240 13:37:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.240 13:37:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.240 13:37:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.240 13:37:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.240 13:37:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.240 13:37:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:37.240 13:37:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:37.240 13:37:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:37.240 13:37:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:37.240 13:37:42 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:37.240 13:37:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:37.240 13:37:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.240 13:37:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.240 13:37:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.240 13:37:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:37.240 13:37:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:37.240 13:37:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.240 13:37:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.498 13:37:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.498 13:37:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.498 13:37:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.498 13:37:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:37.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:23:37.498 00:23:37.498 --- 10.0.0.2 ping statistics --- 00:23:37.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.498 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:37.498 13:37:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:37.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:23:37.498 00:23:37.498 --- 10.0.0.3 ping statistics --- 00:23:37.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.498 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:37.498 13:37:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:23:37.498 00:23:37.498 --- 10.0.0.1 ping statistics --- 00:23:37.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.498 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:37.498 13:37:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.498 13:37:42 -- nvmf/common.sh@421 -- # return 0 00:23:37.498 13:37:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:37.498 13:37:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.498 13:37:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:37.498 13:37:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:37.498 13:37:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.498 13:37:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:37.498 13:37:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:37.498 13:37:42 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:37.498 13:37:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:37.498 13:37:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:37.498 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.498 13:37:42 -- nvmf/common.sh@469 -- # nvmfpid=98958 00:23:37.498 13:37:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:37.498 13:37:42 -- nvmf/common.sh@470 -- # waitforlisten 98958 00:23:37.498 13:37:42 -- common/autotest_common.sh@829 -- # '[' -z 98958 ']' 00:23:37.498 13:37:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.498 13:37:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.498 13:37:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.498 13:37:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.498 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:23:37.498 [2024-12-15 13:37:43.052711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.498 [2024-12-15 13:37:43.052802] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.757 [2024-12-15 13:37:43.191636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:37.757 [2024-12-15 13:37:43.257211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:37.757 [2024-12-15 13:37:43.257540] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.757 [2024-12-15 13:37:43.257746] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.757 [2024-12-15 13:37:43.257900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.757 [2024-12-15 13:37:43.258181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.757 [2024-12-15 13:37:43.258193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.694 13:37:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.694 13:37:44 -- common/autotest_common.sh@862 -- # return 0 00:23:38.694 13:37:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:38.694 13:37:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:38.694 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:23:38.694 13:37:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.694 13:37:44 -- host/multipath.sh@33 -- # nvmfapp_pid=98958 00:23:38.694 13:37:44 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:38.953 [2024-12-15 13:37:44.392117] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.953 13:37:44 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:39.211 Malloc0 00:23:39.211 13:37:44 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:39.470 13:37:45 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.728 13:37:45 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.986 [2024-12-15 13:37:45.579875] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.986 13:37:45 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.245 [2024-12-15 13:37:45.808053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.245 13:37:45 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:40.245 13:37:45 -- host/multipath.sh@44 -- # bdevperf_pid=99066 00:23:40.245 13:37:45 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.245 13:37:45 -- host/multipath.sh@47 -- # waitforlisten 99066 /var/tmp/bdevperf.sock 00:23:40.245 13:37:45 -- common/autotest_common.sh@829 -- # '[' -z 99066 ']' 00:23:40.245 13:37:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.245 13:37:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.245 13:37:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
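Stripped of the xtrace prefixes, the target-side configuration traced above is a short rpc.py sequence: create the TCP transport, back the subsystem with a 64 MB malloc bdev (512-byte blocks), and expose it on two portals (4420 and 4421) so the multipath test has two paths to the same namespace. A condensed restatement of those calls, with nothing beyond what the trace already ran and the rpc.py path abbreviated to its repo-relative form, is:

rpc=scripts/rpc.py    # /home/vagrant/spdk_repo/spdk/scripts/rpc.py in this run

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf host then attaches the same subsystem through both portals against its own RPC socket (/var/tmp/bdevperf.sock), the 4421 attach with -x multipath, which is what yields the single Nvme0n1 bdev the rest of the test drives while flipping ANA states between the two listeners.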
00:23:40.245 13:37:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.245 13:37:45 -- common/autotest_common.sh@10 -- # set +x 00:23:41.622 13:37:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.622 13:37:46 -- common/autotest_common.sh@862 -- # return 0 00:23:41.622 13:37:46 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:41.622 13:37:47 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:41.880 Nvme0n1 00:23:41.880 13:37:47 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:42.139 Nvme0n1 00:23:42.398 13:37:47 -- host/multipath.sh@78 -- # sleep 1 00:23:42.398 13:37:47 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:43.333 13:37:48 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:43.333 13:37:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.591 13:37:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:43.850 13:37:49 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:43.850 13:37:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:43.850 13:37:49 -- host/multipath.sh@65 -- # dtrace_pid=99149 00:23:43.850 13:37:49 -- host/multipath.sh@66 -- # sleep 6 00:23:50.416 13:37:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:50.416 13:37:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:50.416 13:37:55 -- host/multipath.sh@67 -- # active_port=4421 00:23:50.416 13:37:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.416 Attaching 4 probes... 
00:23:50.416 @path[10.0.0.2, 4421]: 21384 00:23:50.416 @path[10.0.0.2, 4421]: 22217 00:23:50.416 @path[10.0.0.2, 4421]: 22331 00:23:50.416 @path[10.0.0.2, 4421]: 22158 00:23:50.416 @path[10.0.0.2, 4421]: 22026 00:23:50.416 13:37:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:50.416 13:37:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:50.416 13:37:55 -- host/multipath.sh@69 -- # sed -n 1p 00:23:50.416 13:37:55 -- host/multipath.sh@69 -- # port=4421 00:23:50.416 13:37:55 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:50.416 13:37:55 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:50.416 13:37:55 -- host/multipath.sh@72 -- # kill 99149 00:23:50.416 13:37:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:50.416 13:37:55 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:50.416 13:37:55 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:50.416 13:37:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:50.675 13:37:56 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:50.675 13:37:56 -- host/multipath.sh@65 -- # dtrace_pid=99286 00:23:50.675 13:37:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:50.675 13:37:56 -- host/multipath.sh@66 -- # sleep 6 00:23:57.240 13:38:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:57.240 13:38:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:57.240 13:38:02 -- host/multipath.sh@67 -- # active_port=4420 00:23:57.241 13:38:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.241 Attaching 4 probes... 
00:23:57.241 @path[10.0.0.2, 4420]: 21489 00:23:57.241 @path[10.0.0.2, 4420]: 22105 00:23:57.241 @path[10.0.0.2, 4420]: 22232 00:23:57.241 @path[10.0.0.2, 4420]: 22014 00:23:57.241 @path[10.0.0.2, 4420]: 22439 00:23:57.241 13:38:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:57.241 13:38:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:57.241 13:38:02 -- host/multipath.sh@69 -- # sed -n 1p 00:23:57.241 13:38:02 -- host/multipath.sh@69 -- # port=4420 00:23:57.241 13:38:02 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:57.241 13:38:02 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:57.241 13:38:02 -- host/multipath.sh@72 -- # kill 99286 00:23:57.241 13:38:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:57.241 13:38:02 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:57.241 13:38:02 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:57.241 13:38:02 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.241 13:38:02 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:57.241 13:38:02 -- host/multipath.sh@65 -- # dtrace_pid=99421 00:23:57.241 13:38:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:57.241 13:38:02 -- host/multipath.sh@66 -- # sleep 6 00:24:03.807 13:38:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:03.807 13:38:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:03.807 13:38:09 -- host/multipath.sh@67 -- # active_port=4421 00:24:03.807 13:38:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.807 Attaching 4 probes... 
00:24:03.807 @path[10.0.0.2, 4421]: 16646 00:24:03.807 @path[10.0.0.2, 4421]: 21907 00:24:03.807 @path[10.0.0.2, 4421]: 21796 00:24:03.807 @path[10.0.0.2, 4421]: 21742 00:24:03.807 @path[10.0.0.2, 4421]: 21871 00:24:03.807 13:38:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:03.807 13:38:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:03.807 13:38:09 -- host/multipath.sh@69 -- # sed -n 1p 00:24:03.807 13:38:09 -- host/multipath.sh@69 -- # port=4421 00:24:03.807 13:38:09 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.807 13:38:09 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:03.807 13:38:09 -- host/multipath.sh@72 -- # kill 99421 00:24:03.807 13:38:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.807 13:38:09 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:03.807 13:38:09 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:03.807 13:38:09 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:04.066 13:38:09 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:04.066 13:38:09 -- host/multipath.sh@65 -- # dtrace_pid=99547 00:24:04.066 13:38:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:04.066 13:38:09 -- host/multipath.sh@66 -- # sleep 6 00:24:10.659 13:38:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:10.659 13:38:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:10.659 13:38:15 -- host/multipath.sh@67 -- # active_port= 00:24:10.659 13:38:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.659 Attaching 4 probes... 
00:24:10.659 00:24:10.659 00:24:10.659 00:24:10.659 00:24:10.659 00:24:10.659 13:38:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:10.659 13:38:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:10.659 13:38:15 -- host/multipath.sh@69 -- # sed -n 1p 00:24:10.659 13:38:15 -- host/multipath.sh@69 -- # port= 00:24:10.659 13:38:15 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:10.659 13:38:15 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:10.659 13:38:15 -- host/multipath.sh@72 -- # kill 99547 00:24:10.659 13:38:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:10.659 13:38:15 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:10.659 13:38:15 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:10.659 13:38:16 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:10.918 13:38:16 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:10.918 13:38:16 -- host/multipath.sh@65 -- # dtrace_pid=99683 00:24:10.918 13:38:16 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:10.918 13:38:16 -- host/multipath.sh@66 -- # sleep 6 00:24:17.486 13:38:22 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:17.486 13:38:22 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:17.486 13:38:22 -- host/multipath.sh@67 -- # active_port=4421 00:24:17.486 13:38:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:17.486 Attaching 4 probes... 
00:24:17.486 @path[10.0.0.2, 4421]: 21088 00:24:17.486 @path[10.0.0.2, 4421]: 21511 00:24:17.486 @path[10.0.0.2, 4421]: 21681 00:24:17.486 @path[10.0.0.2, 4421]: 21531 00:24:17.486 @path[10.0.0.2, 4421]: 22168 00:24:17.486 13:38:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:17.486 13:38:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:17.486 13:38:22 -- host/multipath.sh@69 -- # sed -n 1p 00:24:17.486 13:38:22 -- host/multipath.sh@69 -- # port=4421 00:24:17.486 13:38:22 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:17.486 13:38:22 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:17.486 13:38:22 -- host/multipath.sh@72 -- # kill 99683 00:24:17.486 13:38:22 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:17.486 13:38:22 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:17.486 [2024-12-15 13:38:22.950560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.486 [2024-12-15 13:38:22.950807] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950840] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950958] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set 00:24:17.487 [2024-12-15 13:38:22.950984] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set
00:24:17.487 [2024-12-15 13:38:22.950992 - 13:38:22.951427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d4370 is same with the state(5) to be set (same message logged repeatedly for tqpair=0x23d4370 throughout this interval)
00:24:17.487 13:38:22 -- host/multipath.sh@101 -- # sleep 1
00:24:18.424 13:38:23 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:24:18.424 13:38:23 -- host/multipath.sh@65 -- # dtrace_pid=99813
00:24:18.424 13:38:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:18.424 13:38:23 -- host/multipath.sh@66 -- # sleep 6
00:24:24.987 13:38:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:24:24.987 13:38:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:24.987 13:38:30 -- host/multipath.sh@67 -- # active_port=4420
00:24:24.987 13:38:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:24.987 Attaching 4 probes...
00:24:24.987 @path[10.0.0.2, 4420]: 21740 00:24:24.987 @path[10.0.0.2, 4420]: 21816 00:24:24.987 @path[10.0.0.2, 4420]: 21747 00:24:24.987 @path[10.0.0.2, 4420]: 21927 00:24:24.987 @path[10.0.0.2, 4420]: 21928 00:24:24.988 13:38:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:24.988 13:38:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:24.988 13:38:30 -- host/multipath.sh@69 -- # sed -n 1p 00:24:24.988 13:38:30 -- host/multipath.sh@69 -- # port=4420 00:24:24.988 13:38:30 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:24.988 13:38:30 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:24.988 13:38:30 -- host/multipath.sh@72 -- # kill 99813 00:24:24.988 13:38:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:24.988 13:38:30 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.988 [2024-12-15 13:38:30.539374] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.988 13:38:30 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:25.246 13:38:30 -- host/multipath.sh@111 -- # sleep 6 00:24:31.812 13:38:36 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:31.812 13:38:36 -- host/multipath.sh@65 -- # dtrace_pid=100011 00:24:31.812 13:38:36 -- host/multipath.sh@66 -- # sleep 6 00:24:31.812 13:38:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98958 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:38.387 13:38:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:38.387 13:38:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:38.387 13:38:43 -- host/multipath.sh@67 -- # active_port=4421 00:24:38.387 13:38:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:38.387 Attaching 4 probes... 
00:24:38.387 @path[10.0.0.2, 4421]: 21102 00:24:38.387 @path[10.0.0.2, 4421]: 21662 00:24:38.387 @path[10.0.0.2, 4421]: 21557 00:24:38.387 @path[10.0.0.2, 4421]: 21888 00:24:38.387 @path[10.0.0.2, 4421]: 21891 00:24:38.387 13:38:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:38.387 13:38:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:38.387 13:38:43 -- host/multipath.sh@69 -- # sed -n 1p 00:24:38.387 13:38:43 -- host/multipath.sh@69 -- # port=4421 00:24:38.387 13:38:43 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:38.387 13:38:43 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:38.387 13:38:43 -- host/multipath.sh@72 -- # kill 100011 00:24:38.387 13:38:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:38.387 13:38:43 -- host/multipath.sh@114 -- # killprocess 99066 00:24:38.387 13:38:43 -- common/autotest_common.sh@936 -- # '[' -z 99066 ']' 00:24:38.387 13:38:43 -- common/autotest_common.sh@940 -- # kill -0 99066 00:24:38.387 13:38:43 -- common/autotest_common.sh@941 -- # uname 00:24:38.387 13:38:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.387 13:38:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99066 00:24:38.387 killing process with pid 99066 00:24:38.387 13:38:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:38.387 13:38:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:38.387 13:38:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99066' 00:24:38.387 13:38:43 -- common/autotest_common.sh@955 -- # kill 99066 00:24:38.387 13:38:43 -- common/autotest_common.sh@960 -- # wait 99066 00:24:38.387 Connection closed with partial response: 00:24:38.387 00:24:38.387 00:24:38.387 13:38:43 -- host/multipath.sh@116 -- # wait 99066 00:24:38.387 13:38:43 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:38.387 [2024-12-15 13:37:45.871089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:38.387 [2024-12-15 13:37:45.871187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99066 ] 00:24:38.387 [2024-12-15 13:37:46.007940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.387 [2024-12-15 13:37:46.070161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.387 Running I/O for 90 seconds... 
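The confirm_io_on_port steps traced above (host/multipath.sh@64 through @73) come down to two lookups: the listener port that currently advertises the expected ANA state, and the port that actually carried I/O according to the bpftrace counters. The sketch below reconstructs that check from the commands visible in this log; the function layout and variable names are assumptions and not the verbatim multipath.sh source, while the rpc.py call, the jq filter, and the trace.txt line format are taken directly from the trace above.

confirm_io_on_port() {                                # sketch, not verbatim multipath.sh
    local expected_state=$1 expected_port=$2          # e.g. "non_optimized" 4420 or "optimized" 4421
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # Listener port currently advertising the expected ANA state (rpc.py + jq as traced above).
    local active_port
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # Port that actually received I/O according to the bpftrace counters,
    # i.e. the "@path[10.0.0.2, 4420]: 21740" lines in trace.txt.
    local port
    port=$(cat "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p | cut -d ']' -f1)

    [[ "$active_port" == "$expected_port" ]] && [[ "$port" == "$expected_port" ]]
}

In the first pass above both lookups return 4420; after the 4421 listener is added and set to optimized, the same check returns 4421.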
00:24:38.387 [2024-12-15 13:37:56.125648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.125721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.125777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.125822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.125838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.125860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.125875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.125896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.125953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.125982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.126028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.126090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.126257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.126287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.127342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.127404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.387 [2024-12-15 13:37:56.127559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.387 [2024-12-15 13:37:56.127694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.387 [2024-12-15 13:37:56.127719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.127734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.127772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.127808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.127844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:38.388 [2024-12-15 13:37:56.127880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.127923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.127959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.127974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.128084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.128115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.128161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.128317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.128348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.128948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.128984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.388 [2024-12-15 13:37:56.129262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.388 [2024-12-15 13:37:56.129472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.388 [2024-12-15 13:37:56.129484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:24:38.389 [2024-12-15 13:37:56.129502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.129956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.129985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.389 [2024-12-15 13:37:56.130802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.389 [2024-12-15 13:37:56.130823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.389 [2024-12-15 13:37:56.130837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.130858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.130872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.130893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:37:56.130907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.130927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.130942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.130962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:37:56.131017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:37:56.131303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:37:56.131316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.669677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.669755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669780] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.669811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.669849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.669885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.669943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.669978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.390 [2024-12-15 13:38:02.670377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.390 [2024-12-15 13:38:02.670457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.390 [2024-12-15 13:38:02.670470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.670488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.670501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.671795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.671977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.671991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.672234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.391 [2024-12-15 13:38:02.672285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.391 [2024-12-15 13:38:02.672308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.391 [2024-12-15 13:38:02.672321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.672356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.672390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.672719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672747] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.672982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.672995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f 
p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.673142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.673179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.392 [2024-12-15 13:38:02.673253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.392 [2024-12-15 13:38:02.673483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.392 [2024-12-15 13:38:02.673496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:02.673650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.673970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.673994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:02.674102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:02.674196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:02.674364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:02.674564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:02.674589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:02.674602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.393 [2024-12-15 13:38:09.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.393 [2024-12-15 13:38:09.636954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.393 [2024-12-15 13:38:09.636972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.636985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.637848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.637984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.637998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.638132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.394 [2024-12-15 13:38:09.638199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.394 [2024-12-15 13:38:09.638252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.394 [2024-12-15 13:38:09.638265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.638303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.638377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
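The repeated *NOTICE* pairs in the output above are the SPDK NVMe driver printing each submitted I/O command (nvme_io_qpair_print_command) together with its completion (spdk_nvme_print_completion). The completion status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe status code type 0x3 (path-related) with status code 0x2, i.e. each I/O is failed because the namespace's ANA group is reported inaccessible at that moment. Below is a minimal offline sketch for tallying these notices from a saved console log; it is not part of the autotest output, and the regular expressions are assumptions inferred only from the line format visible here.

#!/usr/bin/env python3
"""Tally SPDK nvme_qpair *NOTICE* lines from a captured console log.

Hypothetical helper (not produced by the test): the patterns below are
inferred from the log format above and may need adjusting for other
SPDK versions.
"""
import re
import sys
from collections import Counter

# Completion prints, e.g.:
#   spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
#   qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)
# Command prints, e.g.:
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25000 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<opc>\w+) sqid:(?P<sqid>\d+) "
    r"cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def tally(stream):
    """Count commands per opcode and completions per (status, sct, sc)."""
    commands, completions = Counter(), Counter()
    for line in stream:
        # finditer() copes with console lines that were wrapped or fused together.
        for m in CMD_RE.finditer(line):
            commands[m.group("opc")] += 1
        for m in CPL_RE.finditer(line):
            completions[(m.group("status"), m.group("sct"), m.group("sc"))] += 1
    return commands, completions

if __name__ == "__main__":
    cmds, cpls = tally(sys.stdin)
    for opc, n in cmds.most_common():
        print(f"{n:8d}  {opc}")
    for (status, sct, sc), n in cpls.most_common():
        print(f"{n:8d}  {status} (sct 0x{sct}, sc 0x{sc})")

Usage would be along the lines of: python3 tally_notices.py < console.log, which prints how many READ/WRITE commands were logged and how many completions carried each status, making bursts like the one above easy to summarize.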
00:24:38.395 [2024-12-15 13:38:09.638721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.638966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.638986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.395 [2024-12-15 13:38:09.639586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.395 [2024-12-15 13:38:09.639746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.395 [2024-12-15 13:38:09.639760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.639798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.639835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.639872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.639909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.639955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.639979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.639992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 
dnr:0 00:24:38.396 [2024-12-15 13:38:09.640054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.640925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.640968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.640993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.641007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.641031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.396 [2024-12-15 13:38:09.641044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.396 [2024-12-15 13:38:09.641068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.396 [2024-12-15 13:38:09.641081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:38.397 [2024-12-15 13:38:09.641230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:09.641836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:09.641861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.397 [2024-12-15 13:38:09.641891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.951882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.951935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:130824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.397 [2024-12-15 13:38:22.952300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.397 [2024-12-15 13:38:22.952313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130248 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.952942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 
[2024-12-15 13:38:22.952966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.952979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.952991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.953147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.953197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.398 [2024-12-15 13:38:22.953223] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.398 [2024-12-15 13:38:22.953280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.398 [2024-12-15 13:38:22.953293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.953361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.953463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.953491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.953835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.953861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.953982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.953995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 
13:38:22.954197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.399 [2024-12-15 13:38:22.954274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.399 [2024-12-15 13:38:22.954463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.399 [2024-12-15 13:38:22.954475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954489] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.954944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.954970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.954990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.400 [2024-12-15 13:38:22.955358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.400 [2024-12-15 13:38:22.955515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.400 [2024-12-15 13:38:22.955529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.401 [2024-12-15 13:38:22.955542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.401 [2024-12-15 13:38:22.955556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.401 [2024-12-15 13:38:22.955572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.401 [2024-12-15 13:38:22.955617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.401 [2024-12-15 13:38:22.955634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.401 [2024-12-15 13:38:22.955649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.401 [2024-12-15 13:38:22.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:24:38.401 [2024-12-15 13:38:22.955682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.401 [2024-12-15 13:38:22.955695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.401 [2024-12-15 13:38:22.955714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e2060 is same with the state(5) to be set
00:24:38.401 [2024-12-15 13:38:22.955733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:38.401 [2024-12-15 13:38:22.955743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:38.401 [2024-12-15 13:38:22.955753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130912 len:8 PRP1 0x0 PRP2 0x0
00:24:38.401 [2024-12-15 13:38:22.955765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.401 [2024-12-15 13:38:22.955831] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16e2060 was disconnected and freed. reset controller.
00:24:38.401 [2024-12-15 13:38:22.957088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.401 [2024-12-15 13:38:22.957171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f3a00 (9): Bad file descriptor
00:24:38.401 [2024-12-15 13:38:22.957288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.401 [2024-12-15 13:38:22.957340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.401 [2024-12-15 13:38:22.957361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f3a00 with addr=10.0.0.2, port=4421
00:24:38.401 [2024-12-15 13:38:22.957375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3a00 is same with the state(5) to be set
00:24:38.401 [2024-12-15 13:38:22.957397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f3a00 (9): Bad file descriptor
00:24:38.401 [2024-12-15 13:38:22.957418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.401 [2024-12-15 13:38:22.957432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.401 [2024-12-15 13:38:22.957446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.401 [2024-12-15 13:38:22.957468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.401 [2024-12-15 13:38:22.957482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.401 [2024-12-15 13:38:33.011085] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
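The connect() failures in the reset sequence above report errno = 111, which on Linux is ECONNREFUSED, suggesting nothing was accepting on 10.0.0.2:4421 at that instant; the follow-up reset logged at 13:38:33 then succeeds. A quick, illustrative way to confirm the errno name on a test box (header path may vary by distribution) is:

    grep -w 111 /usr/include/asm-generic/errno.h   # shows the ECONNREFUSED define

The run summary printed a few lines further down is also self-consistent: taking the 4096-byte IO size from its Job line, the reported 12411.49 IOPS works out to roughly 48.48 MiB/s, matching the MiB/s column:

    awk 'BEGIN { print 12411.49 * 4096 / 1048576 }'   # ~48.48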
00:24:38.401 Received shutdown signal, test time was about 55.239356 seconds
00:24:38.401
00:24:38.401 Latency(us)
00:24:38.401 [2024-12-15T13:38:44.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:38.401 [2024-12-15T13:38:44.091Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:38.401 Verification LBA range: start 0x0 length 0x4000
00:24:38.401 Nvme0n1 : 55.24 12411.49 48.48 0.00 0.00 10296.35 301.61 7015926.69
00:24:38.401 [2024-12-15T13:38:44.091Z] ===================================================================================================================
00:24:38.401 [2024-12-15T13:38:44.091Z] Total : 12411.49 48.48 0.00 0.00 10296.35 301.61 7015926.69
00:24:38.401 13:38:43 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:38.401 13:38:43 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:38.401 13:38:43 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:38.401 13:38:43 -- host/multipath.sh@125 -- # nvmftestfini
00:24:38.401 13:38:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:38.401 13:38:43 -- nvmf/common.sh@116 -- # sync
00:24:38.401 13:38:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:38.401 13:38:43 -- nvmf/common.sh@119 -- # set +e
00:24:38.401 13:38:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:38.401 13:38:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:38.401 rmmod nvme_tcp
00:24:38.401 rmmod nvme_fabrics
00:24:38.401 rmmod nvme_keyring
00:24:38.401 13:38:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:38.401 13:38:43 -- nvmf/common.sh@123 -- # set -e
00:24:38.401 13:38:43 -- nvmf/common.sh@124 -- # return 0
00:24:38.401 13:38:43 -- nvmf/common.sh@477 -- # '[' -n 98958 ']'
00:24:38.401 13:38:43 -- nvmf/common.sh@478 -- # killprocess 98958
00:24:38.401 13:38:43 -- common/autotest_common.sh@936 -- # '[' -z 98958 ']'
00:24:38.401 13:38:43 -- common/autotest_common.sh@940 -- # kill -0 98958
00:24:38.401 13:38:43 -- common/autotest_common.sh@941 -- # uname
00:24:38.401 13:38:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:38.401 13:38:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98958
00:24:38.401 killing process with pid 98958 13:38:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:38.401 13:38:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:38.401 13:38:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98958'
00:24:38.401 13:38:43 -- common/autotest_common.sh@955 -- # kill 98958
00:24:38.401 13:38:43 -- common/autotest_common.sh@960 -- # wait 98958
00:24:38.401 13:38:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:38.401 13:38:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:38.401 13:38:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:38.401 13:38:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:38.401 13:38:43 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:38.401 13:38:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:38.401 13:38:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:38.401 13:38:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:38.401 13:38:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:38.401
00:24:38.401 real 1m1.590s
00:24:38.401 user 2m53.737s
00:24:38.401
sys 0m14.062s 00:24:38.401 13:38:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.401 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:38.401 ************************************ 00:24:38.401 END TEST nvmf_multipath 00:24:38.401 ************************************ 00:24:38.401 13:38:44 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:38.401 13:38:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:38.401 13:38:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.401 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:38.687 ************************************ 00:24:38.687 START TEST nvmf_timeout 00:24:38.687 ************************************ 00:24:38.687 13:38:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:38.687 * Looking for test storage... 00:24:38.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:38.687 13:38:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:38.687 13:38:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:38.687 13:38:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:38.687 13:38:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:38.687 13:38:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:38.687 13:38:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:38.687 13:38:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:38.687 13:38:44 -- scripts/common.sh@335 -- # IFS=.-: 00:24:38.687 13:38:44 -- scripts/common.sh@335 -- # read -ra ver1 00:24:38.687 13:38:44 -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.687 13:38:44 -- scripts/common.sh@336 -- # read -ra ver2 00:24:38.687 13:38:44 -- scripts/common.sh@337 -- # local 'op=<' 00:24:38.687 13:38:44 -- scripts/common.sh@339 -- # ver1_l=2 00:24:38.687 13:38:44 -- scripts/common.sh@340 -- # ver2_l=1 00:24:38.687 13:38:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:38.687 13:38:44 -- scripts/common.sh@343 -- # case "$op" in 00:24:38.687 13:38:44 -- scripts/common.sh@344 -- # : 1 00:24:38.687 13:38:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:38.687 13:38:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.687 13:38:44 -- scripts/common.sh@364 -- # decimal 1 00:24:38.687 13:38:44 -- scripts/common.sh@352 -- # local d=1 00:24:38.687 13:38:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.687 13:38:44 -- scripts/common.sh@354 -- # echo 1 00:24:38.687 13:38:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:38.687 13:38:44 -- scripts/common.sh@365 -- # decimal 2 00:24:38.687 13:38:44 -- scripts/common.sh@352 -- # local d=2 00:24:38.687 13:38:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.687 13:38:44 -- scripts/common.sh@354 -- # echo 2 00:24:38.687 13:38:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:38.687 13:38:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:38.687 13:38:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:38.687 13:38:44 -- scripts/common.sh@367 -- # return 0 00:24:38.687 13:38:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.687 13:38:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:38.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.687 --rc genhtml_branch_coverage=1 00:24:38.687 --rc genhtml_function_coverage=1 00:24:38.687 --rc genhtml_legend=1 00:24:38.687 --rc geninfo_all_blocks=1 00:24:38.687 --rc geninfo_unexecuted_blocks=1 00:24:38.687 00:24:38.687 ' 00:24:38.687 13:38:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:38.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.688 --rc genhtml_branch_coverage=1 00:24:38.688 --rc genhtml_function_coverage=1 00:24:38.688 --rc genhtml_legend=1 00:24:38.688 --rc geninfo_all_blocks=1 00:24:38.688 --rc geninfo_unexecuted_blocks=1 00:24:38.688 00:24:38.688 ' 00:24:38.688 13:38:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:38.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.688 --rc genhtml_branch_coverage=1 00:24:38.688 --rc genhtml_function_coverage=1 00:24:38.688 --rc genhtml_legend=1 00:24:38.688 --rc geninfo_all_blocks=1 00:24:38.688 --rc geninfo_unexecuted_blocks=1 00:24:38.688 00:24:38.688 ' 00:24:38.688 13:38:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:38.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.688 --rc genhtml_branch_coverage=1 00:24:38.688 --rc genhtml_function_coverage=1 00:24:38.688 --rc genhtml_legend=1 00:24:38.688 --rc geninfo_all_blocks=1 00:24:38.688 --rc geninfo_unexecuted_blocks=1 00:24:38.688 00:24:38.688 ' 00:24:38.688 13:38:44 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.688 13:38:44 -- nvmf/common.sh@7 -- # uname -s 00:24:38.688 13:38:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.688 13:38:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.688 13:38:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.688 13:38:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.688 13:38:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.688 13:38:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.688 13:38:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.688 13:38:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.688 13:38:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.688 13:38:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:24:38.688 
13:38:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:24:38.688 13:38:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.688 13:38:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.688 13:38:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.688 13:38:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.688 13:38:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.688 13:38:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.688 13:38:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.688 13:38:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.688 13:38:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.688 13:38:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.688 13:38:44 -- paths/export.sh@5 -- # export PATH 00:24:38.688 13:38:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.688 13:38:44 -- nvmf/common.sh@46 -- # : 0 00:24:38.688 13:38:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:38.688 13:38:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:38.688 13:38:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:38.688 13:38:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.688 13:38:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.688 13:38:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
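The nvmf/common.sh prologue above fixes the host identity once per run: nvme gen-hostnqn produces NVME_HOSTNQN, the UUID portion is reused as NVME_HOSTID, and both are packed into the NVME_HOST array so that tests which use the kernel initiator (NVME_CONNECT='nvme connect') present a stable identity. A short sketch of how those values are typically consumed; the target address, port and subsystem NQN are the ones this log configures further below, and driving the kernel initiator at all is an assumption here, since the timeout test itself uses bdevperf:

  HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:245f2070-...
  HOSTID=${HOSTNQN##*uuid:}          # reuse the UUID portion as the host ID, as common.sh does
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"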
00:24:38.688 13:38:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:38.688 13:38:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:38.688 13:38:44 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:38.688 13:38:44 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:38.688 13:38:44 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.688 13:38:44 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:38.688 13:38:44 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.688 13:38:44 -- host/timeout.sh@19 -- # nvmftestinit 00:24:38.688 13:38:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:38.688 13:38:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.688 13:38:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:38.688 13:38:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:38.688 13:38:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:38.688 13:38:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.688 13:38:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.688 13:38:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.688 13:38:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:38.688 13:38:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:38.688 13:38:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.688 13:38:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.688 13:38:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:38.688 13:38:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:38.688 13:38:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.688 13:38:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.688 13:38:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.688 13:38:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.688 13:38:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:38.688 13:38:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.688 13:38:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.688 13:38:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.688 13:38:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:38.688 13:38:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:38.688 Cannot find device "nvmf_tgt_br" 00:24:38.688 13:38:44 -- nvmf/common.sh@154 -- # true 00:24:38.688 13:38:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.688 Cannot find device "nvmf_tgt_br2" 00:24:38.688 13:38:44 -- nvmf/common.sh@155 -- # true 00:24:38.688 13:38:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:38.688 13:38:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:38.688 Cannot find device "nvmf_tgt_br" 00:24:38.688 13:38:44 -- nvmf/common.sh@157 -- # true 00:24:38.688 13:38:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:38.688 Cannot find device "nvmf_tgt_br2" 00:24:38.688 13:38:44 -- nvmf/common.sh@158 -- # true 00:24:38.688 13:38:44 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:38.688 13:38:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:38.950 13:38:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.950 13:38:44 -- nvmf/common.sh@161 -- # true 00:24:38.950 13:38:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.950 13:38:44 -- nvmf/common.sh@162 -- # true 00:24:38.950 13:38:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:38.950 13:38:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:38.950 13:38:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:38.950 13:38:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:38.950 13:38:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:38.950 13:38:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:38.950 13:38:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:38.950 13:38:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:38.950 13:38:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:38.950 13:38:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:38.950 13:38:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:38.950 13:38:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:38.950 13:38:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:38.950 13:38:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.950 13:38:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.950 13:38:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.950 13:38:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:38.950 13:38:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:38.950 13:38:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.950 13:38:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.951 13:38:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.951 13:38:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.951 13:38:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.951 13:38:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:38.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:38.951 00:24:38.951 --- 10.0.0.2 ping statistics --- 00:24:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.951 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:38.951 13:38:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:38.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:38.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:24:38.951 00:24:38.951 --- 10.0.0.3 ping statistics --- 00:24:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.951 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:38.951 13:38:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:38.951 00:24:38.951 --- 10.0.0.1 ping statistics --- 00:24:38.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.951 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:38.951 13:38:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.951 13:38:44 -- nvmf/common.sh@421 -- # return 0 00:24:38.951 13:38:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:38.951 13:38:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.951 13:38:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:38.951 13:38:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:38.951 13:38:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.951 13:38:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:38.951 13:38:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:38.951 13:38:44 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:38.951 13:38:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:38.951 13:38:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.951 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:38.951 13:38:44 -- nvmf/common.sh@469 -- # nvmfpid=100338 00:24:38.951 13:38:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:38.951 13:38:44 -- nvmf/common.sh@470 -- # waitforlisten 100338 00:24:38.951 13:38:44 -- common/autotest_common.sh@829 -- # '[' -z 100338 ']' 00:24:38.951 13:38:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.951 13:38:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.951 13:38:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.951 13:38:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.951 13:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:39.210 [2024-12-15 13:38:44.645974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:39.210 [2024-12-15 13:38:44.646096] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.210 [2024-12-15 13:38:44.787163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:39.210 [2024-12-15 13:38:44.843003] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:39.210 [2024-12-15 13:38:44.843137] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.210 [2024-12-15 13:38:44.843149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
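The nvmf_veth_init sequence above builds the test topology from scratch: a network namespace for the target, veth pairs for the initiator and two target-side interfaces, a bridge joining them, addresses 10.0.0.1/2/3, an iptables rule admitting TCP port 4420, and three pings to confirm reachability before nvmf_tgt is started inside the namespace. A condensed standalone sketch of the same topology, assuming root privileges and the names used in this log (the second target interface, 10.0.0.3 on nvmf_tgt_if2/nvmf_tgt_br2, is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # initiator -> target, as verified above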
00:24:39.210 [2024-12-15 13:38:44.843157] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.210 [2024-12-15 13:38:44.843309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.210 [2024-12-15 13:38:44.843561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.146 13:38:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.146 13:38:45 -- common/autotest_common.sh@862 -- # return 0 00:24:40.146 13:38:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:40.146 13:38:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:40.146 13:38:45 -- common/autotest_common.sh@10 -- # set +x 00:24:40.146 13:38:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.146 13:38:45 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.146 13:38:45 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:40.146 [2024-12-15 13:38:45.810062] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.146 13:38:45 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:40.405 Malloc0 00:24:40.405 13:38:46 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.664 13:38:46 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.923 13:38:46 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.182 [2024-12-15 13:38:46.762573] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.182 13:38:46 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:41.182 13:38:46 -- host/timeout.sh@32 -- # bdevperf_pid=100429 00:24:41.182 13:38:46 -- host/timeout.sh@34 -- # waitforlisten 100429 /var/tmp/bdevperf.sock 00:24:41.182 13:38:46 -- common/autotest_common.sh@829 -- # '[' -z 100429 ']' 00:24:41.182 13:38:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.182 13:38:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:41.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:41.182 13:38:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.182 13:38:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:41.182 13:38:46 -- common/autotest_common.sh@10 -- # set +x 00:24:41.182 [2024-12-15 13:38:46.820988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
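With the target running on core mask 0x3 inside the namespace, timeout.sh provisions it over the default RPC socket: a TCP transport (the -t tcp -o -u 8192 options common.sh selects for TCP), a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace (-a allows any host, -s sets the serial number), and a TCP listener on 10.0.0.2:4420, before launching bdevperf on its own RPC socket with -z so the verify workload is started later via perform_tests. The target-side sequence, condensed from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420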
00:24:41.182 [2024-12-15 13:38:46.821065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100429 ] 00:24:41.441 [2024-12-15 13:38:46.953106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.441 [2024-12-15 13:38:47.017684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.378 13:38:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.378 13:38:47 -- common/autotest_common.sh@862 -- # return 0 00:24:42.378 13:38:47 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:42.378 13:38:47 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:42.637 NVMe0n1 00:24:42.637 13:38:48 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.637 13:38:48 -- host/timeout.sh@51 -- # rpc_pid=100471 00:24:42.637 13:38:48 -- host/timeout.sh@53 -- # sleep 1 00:24:42.896 Running I/O for 10 seconds... 00:24:43.836 13:38:49 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.836 [2024-12-15 13:38:49.472041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472184] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472216] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 
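On the initiator side the same rpc.py is pointed at the bdevperf socket: retries are configured with bdev_nvme_set_options -r -1, controller NVMe0 is attached to 10.0.0.2:4420 with a 5-second controller-loss timeout and a 2-second reconnect delay, and bdevperf.py perform_tests starts the 10-second verify run (queue depth 128, 4096-byte I/O). The test then removes the 4420 listener from the subsystem, which is what triggers the long run of tcp.c:1576 receive-state messages around this point and exercises the timeout/reconnect settings. Condensed from the trace, with the paths and names used here:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  $RPC -s $SOCK bdev_nvme_set_options -r -1
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &
  # fault injection: drop the listener so in-flight reads hit the reconnect path
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420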
00:24:43.836 [2024-12-15 13:38:49.472262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472301] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472347] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472429] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472445] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472462] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.472634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180f490 is same with the state(5) to be set 00:24:43.836 [2024-12-15 13:38:49.473040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.836 [2024-12-15 13:38:49.473071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.836 [2024-12-15 13:38:49.473090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.836 [2024-12-15 13:38:49.473100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.836 [2024-12-15 13:38:49.473110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.836 [2024-12-15 13:38:49.473119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:43.836 [2024-12-15 13:38:49.473129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.836 [2024-12-15 13:38:49.473137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.836 [2024-12-15 13:38:49.473147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.836 [2024-12-15 13:38:49.473155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.836 [2024-12-15 13:38:49.473164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 
13:38:49.473306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.837 [2024-12-15 13:38:49.473844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.837 [2024-12-15 13:38:49.473863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.837 [2024-12-15 13:38:49.473873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.473881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.473893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.473901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.473921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.473956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.473966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.473995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91904 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.838 [2024-12-15 13:38:49.474164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:43.838 [2024-12-15 13:38:49.474231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.838 [2024-12-15 13:38:49.474281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.838 [2024-12-15 13:38:49.474349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.838 [2024-12-15 13:38:49.474391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.838 [2024-12-15 13:38:49.474408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.838 [2024-12-15 13:38:49.474547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.838 [2024-12-15 13:38:49.474556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.474847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.474971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.474980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.475144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.475161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 
[2024-12-15 13:38:49.475271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.839 [2024-12-15 13:38:49.475305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.839 [2024-12-15 13:38:49.475315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.839 [2024-12-15 13:38:49.475322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.840 [2024-12-15 13:38:49.475574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.840 [2024-12-15 13:38:49.475736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab4780 is same with the state(5) to be set 00:24:43.840 [2024-12-15 13:38:49.475765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.840 [2024-12-15 13:38:49.475776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.840 [2024-12-15 13:38:49.475794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92416 len:8 PRP1 0x0 PRP2 0x0 00:24:43.840 [2024-12-15 13:38:49.475802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.475855] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab4780 was disconnected and freed. reset controller. 
00:24:43.840 [2024-12-15 13:38:49.476006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.840 [2024-12-15 13:38:49.476030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.476040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.840 [2024-12-15 13:38:49.476048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.476073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.840 [2024-12-15 13:38:49.476081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.476090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:43.840 [2024-12-15 13:38:49.476098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.840 [2024-12-15 13:38:49.476106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f8c0 is same with the state(5) to be set 00:24:43.840 [2024-12-15 13:38:49.476313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 [2024-12-15 13:38:49.476335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f8c0 (9): Bad file descriptor 00:24:43.840 [2024-12-15 13:38:49.476458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-12-15 13:38:49.476506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.840 [2024-12-15 13:38:49.476523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2f8c0 with addr=10.0.0.2, port=4420 00:24:43.840 [2024-12-15 13:38:49.476539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f8c0 is same with the state(5) to be set 00:24:43.840 [2024-12-15 13:38:49.476556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f8c0 (9): Bad file descriptor 00:24:43.840 [2024-12-15 13:38:49.476571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.840 [2024-12-15 13:38:49.476580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.840 [2024-12-15 13:38:49.476594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.840 [2024-12-15 13:38:49.487049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.840 [2024-12-15 13:38:49.487081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.840 13:38:49 -- host/timeout.sh@56 -- # sleep 2 00:24:46.375 [2024-12-15 13:38:51.487308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.375 [2024-12-15 13:38:51.487463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.375 [2024-12-15 13:38:51.487498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2f8c0 with addr=10.0.0.2, port=4420 00:24:46.375 [2024-12-15 13:38:51.487512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f8c0 is same with the state(5) to be set 00:24:46.375 [2024-12-15 13:38:51.487545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f8c0 (9): Bad file descriptor 00:24:46.375 [2024-12-15 13:38:51.487581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.375 [2024-12-15 13:38:51.487592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.376 [2024-12-15 13:38:51.487603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.376 [2024-12-15 13:38:51.487660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.376 [2024-12-15 13:38:51.487674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.376 13:38:51 -- host/timeout.sh@57 -- # get_controller 00:24:46.376 13:38:51 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.376 13:38:51 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:46.376 13:38:51 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:24:46.376 13:38:51 -- host/timeout.sh@58 -- # get_bdev 00:24:46.376 13:38:51 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:46.376 13:38:51 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:46.376 13:38:52 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:24:46.376 13:38:52 -- host/timeout.sh@61 -- # sleep 5 00:24:48.280 [2024-12-15 13:38:53.487809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.280 [2024-12-15 13:38:53.487908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:48.281 [2024-12-15 13:38:53.487928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2f8c0 with addr=10.0.0.2, port=4420 00:24:48.281 [2024-12-15 13:38:53.487942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2f8c0 is same with the state(5) to be set 00:24:48.281 [2024-12-15 13:38:53.487968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2f8c0 (9): Bad file descriptor 00:24:48.281 [2024-12-15 13:38:53.487987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:48.281 [2024-12-15 13:38:53.488012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:48.281 [2024-12-15 13:38:53.488023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:24:48.281 [2024-12-15 13:38:53.488051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:48.281 [2024-12-15 13:38:53.488062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.183 [2024-12-15 13:38:55.488098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.183 [2024-12-15 13:38:55.488150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.183 [2024-12-15 13:38:55.488160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.183 [2024-12-15 13:38:55.488169] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:50.183 [2024-12-15 13:38:55.488190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:51.119 00:24:51.119 Latency(us) 00:24:51.119 [2024-12-15T13:38:56.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.119 [2024-12-15T13:38:56.809Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:51.120 Verification LBA range: start 0x0 length 0x4000 00:24:51.120 NVMe0n1 : 8.12 1413.88 5.52 15.76 0.00 89399.33 2695.91 7015926.69 00:24:51.120 [2024-12-15T13:38:56.810Z] =================================================================================================================== 00:24:51.120 [2024-12-15T13:38:56.810Z] Total : 1413.88 5.52 15.76 0.00 89399.33 2695.91 7015926.69 00:24:51.120 0 00:24:51.378 13:38:57 -- host/timeout.sh@62 -- # get_controller 00:24:51.378 13:38:57 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:24:51.378 13:38:57 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.637 13:38:57 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:24:51.637 13:38:57 -- host/timeout.sh@63 -- # get_bdev 00:24:51.637 13:38:57 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:24:51.637 13:38:57 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:24:51.896 13:38:57 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:24:51.896 13:38:57 -- host/timeout.sh@65 -- # wait 100471 00:24:51.896 13:38:57 -- host/timeout.sh@67 -- # killprocess 100429 00:24:51.896 13:38:57 -- common/autotest_common.sh@936 -- # '[' -z 100429 ']' 00:24:51.896 13:38:57 -- common/autotest_common.sh@940 -- # kill -0 100429 00:24:51.896 13:38:57 -- common/autotest_common.sh@941 -- # uname 00:24:51.896 13:38:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:51.896 13:38:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100429 00:24:52.155 13:38:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:52.155 13:38:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:52.155 killing process with pid 100429 00:24:52.155 13:38:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100429' 00:24:52.155 Received shutdown signal, test time was about 9.240122 seconds 00:24:52.155 00:24:52.155 Latency(us) 00:24:52.155 [2024-12-15T13:38:57.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.155 [2024-12-15T13:38:57.845Z] 
=================================================================================================================== 00:24:52.155 [2024-12-15T13:38:57.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:52.155 13:38:57 -- common/autotest_common.sh@955 -- # kill 100429 00:24:52.155 13:38:57 -- common/autotest_common.sh@960 -- # wait 100429 00:24:52.155 13:38:57 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.414 [2024-12-15 13:38:58.023758] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.414 13:38:58 -- host/timeout.sh@74 -- # bdevperf_pid=100630 00:24:52.414 13:38:58 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:52.414 13:38:58 -- host/timeout.sh@76 -- # waitforlisten 100630 /var/tmp/bdevperf.sock 00:24:52.414 13:38:58 -- common/autotest_common.sh@829 -- # '[' -z 100630 ']' 00:24:52.414 13:38:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.414 13:38:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.414 13:38:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.414 13:38:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.414 13:38:58 -- common/autotest_common.sh@10 -- # set +x 00:24:52.414 [2024-12-15 13:38:58.086871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:52.414 [2024-12-15 13:38:58.086960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100630 ] 00:24:52.673 [2024-12-15 13:38:58.219833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.673 [2024-12-15 13:38:58.291688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.610 13:38:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.610 13:38:59 -- common/autotest_common.sh@862 -- # return 0 00:24:53.610 13:38:59 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:53.610 13:38:59 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:24:54.177 NVMe0n1 00:24:54.177 13:38:59 -- host/timeout.sh@84 -- # rpc_pid=100676 00:24:54.177 13:38:59 -- host/timeout.sh@86 -- # sleep 1 00:24:54.177 13:38:59 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.177 Running I/O for 10 seconds... 
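For reference, the bdevperf setup traced just above reduces to the following rpc.py sequence (a minimal sketch assembled from the commands recorded in this log; the relative paths and the trailing backgrounding are illustrative, the script itself uses the absolute /home/vagrant/spdk_repo/spdk paths shown above):

  # Expose the target subsystem on the TCP listener used by this test
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Start bdevperf in wait mode on core 2 with a 128-deep, 4 KiB verify workload for 10 s
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # Apply the test's bdev_nvme options (-r -1), then attach the controller with short reconnect
  # timeouts: controller loss declared after 5 s, fast I/O failure after 2 s, reconnects 1 s apart
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Kick off the I/O run; the listener is then removed (the next step in this log) to force
  # the reconnect/timeout path that the remainder of the output exercises
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &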
00:24:55.113 13:39:00 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.374 [2024-12-15 13:39:00.941542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941702] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941753] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.941981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942054] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942062] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942085] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.374 [2024-12-15 13:39:00.942109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942182] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b4ca0 is same with the state(5) to be set 00:24:55.375 [2024-12-15 13:39:00.942458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942537] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.942975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.942999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 
[2024-12-15 13:39:00.943030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.375 [2024-12-15 13:39:00.943171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.375 [2024-12-15 13:39:00.943182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:87 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.376 [2024-12-15 13:39:00.943901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.376 [2024-12-15 13:39:00.943922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.376 [2024-12-15 13:39:00.943933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.943943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.943954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.943979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944149] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 
13:39:00.944555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.377 [2024-12-15 13:39:00.944569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.377 [2024-12-15 13:39:00.944681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.377 [2024-12-15 13:39:00.944692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.378 [2024-12-15 13:39:00.944763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.378 [2024-12-15 13:39:00.944803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.944960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.944995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.378 [2024-12-15 13:39:00.945045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:102 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.378 [2024-12-15 13:39:00.945158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.378 [2024-12-15 13:39:00.945177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.378 [2024-12-15 13:39:00.945265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.378 [2024-12-15 13:39:00.945276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:55.378 [2024-12-15 13:39:00.945300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.378 [2024-12-15 13:39:00.945311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:55.378 [2024-12-15 13:39:00.945319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.378 [2024-12-15 13:39:00.945330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:55.378 [2024-12-15 13:39:00.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.378 [2024-12-15 13:39:00.945348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2b660 is same with the state(5) to be set
00:24:55.378 [2024-12-15 13:39:00.945359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:55.378 [2024-12-15 13:39:00.945367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:55.378 [2024-12-15 13:39:00.945375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4048 len:8 PRP1 0x0 PRP2 0x0
00:24:55.378 [2024-12-15 13:39:00.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.378 [2024-12-15 13:39:00.945439] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc2b660 was disconnected and freed. reset controller.
00:24:55.378 [2024-12-15 13:39:00.945750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:55.378 [2024-12-15 13:39:00.945830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor
00:24:55.378 [2024-12-15 13:39:00.945936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.378 [2024-12-15 13:39:00.946002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.378 [2024-12-15 13:39:00.946018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420
00:24:55.378 [2024-12-15 13:39:00.946029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set
00:24:55.378 [2024-12-15 13:39:00.946047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor
00:24:55.378 [2024-12-15 13:39:00.946064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:55.378 [2024-12-15 13:39:00.946073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:55.378 [2024-12-15 13:39:00.946083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.379 [2024-12-15 13:39:00.946102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:55.379 [2024-12-15 13:39:00.946113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:55.379 13:39:00 -- host/timeout.sh@90 -- # sleep 1
00:24:56.315 [2024-12-15 13:39:01.946181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.315 [2024-12-15 13:39:01.946266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:56.315 [2024-12-15 13:39:01.946284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420
00:24:56.315 [2024-12-15 13:39:01.946294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set
00:24:56.315 [2024-12-15 13:39:01.946312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor
00:24:56.315 [2024-12-15 13:39:01.946327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.315 [2024-12-15 13:39:01.946335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.315 [2024-12-15 13:39:01.946344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.315 [2024-12-15 13:39:01.946362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:56.315 [2024-12-15 13:39:01.946371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.315 13:39:01 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:56.574 [2024-12-15 13:39:02.259370] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:56.833 13:39:02 -- host/timeout.sh@92 -- # wait 100676
00:24:57.400 [2024-12-15 13:39:02.965423] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:05.587
00:25:05.587                                                  Latency(us)
00:25:05.587 [2024-12-15T13:39:11.277Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s     Average        min          max
00:25:05.587 [2024-12-15T13:39:11.277Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:05.587                              Verification LBA range: start 0x0 length 0x4000
00:25:05.587                              NVMe0n1                     : 10.01       10853.25      42.40      0.00      0.00    11775.16    1117.09   3019898.88
00:25:05.587 [2024-12-15T13:39:11.277Z] ===================================================================================================================
00:25:05.587 [2024-12-15T13:39:11.277Z] Total                       : 10853.25         42.40      0.00      0.00    11775.16    1117.09   3019898.88
00:25:05.587 0
00:25:05.587 13:39:09 -- host/timeout.sh@97 -- # rpc_pid=100794
00:25:05.587 13:39:09 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:05.587 13:39:09 -- host/timeout.sh@98 -- # sleep 1
00:25:05.587 Running I/O for 10 seconds...
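For readers tracing the timeout test flow: the runs of ABORTED - SQ DELETION completions in this log line up with the NVMe/TCP listener being removed and re-added on the target while bdevperf keeps verify I/O in flight. A minimal sketch of that toggle, using only the commands that appear in this log (the sleeps and wait logic of host/timeout.sh around them are assumed, not reproduced here):

    # re-add the TCP listener so the host-side controller reset can succeed (seen above at 13:39:01)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # remove the listener again to provoke the abort/reconnect loop (seen below at 13:39:11)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # drive the verify workload through the bdevperf daemon
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests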
00:25:05.587 13:39:10 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.587 [2024-12-15 13:39:11.108276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108439] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108447] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108454] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108547] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108575] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108683] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108699] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108708] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108716] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108732] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108757] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108781] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.587 [2024-12-15 13:39:11.108828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the 
state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108924] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108932] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.108949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810110 is same with the state(5) to be set 00:25:05.588 [2024-12-15 13:39:11.109192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.588 [2024-12-15 13:39:11.109826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.588 [2024-12-15 13:39:11.109837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 
[2024-12-15 13:39:11.109847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.109982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.109992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.589 [2024-12-15 13:39:11.110255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.589 [2024-12-15 13:39:11.110529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.589 [2024-12-15 13:39:11.110540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110701] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.110974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.110985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.110994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6880 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.590 [2024-12-15 13:39:11.111212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.590 [2024-12-15 13:39:11.111268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.590 [2024-12-15 13:39:11.111278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 
13:39:11.111317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.591 [2024-12-15 13:39:11.111720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.591 [2024-12-15 13:39:11.111875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf71d0 is same with the state(5) to be set 00:25:05.591 [2024-12-15 13:39:11.111903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:05.591 [2024-12-15 13:39:11.111911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:05.591 [2024-12-15 13:39:11.111919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6672 len:8 PRP1 0x0 PRP2 0x0 00:25:05.591 [2024-12-15 13:39:11.111928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.591 [2024-12-15 13:39:11.111981] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbf71d0 was disconnected and freed. reset controller. 00:25:05.591 [2024-12-15 13:39:11.112264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.591 [2024-12-15 13:39:11.112339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor 00:25:05.591 [2024-12-15 13:39:11.112455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-12-15 13:39:11.112521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.591 [2024-12-15 13:39:11.112538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420 00:25:05.592 [2024-12-15 13:39:11.112549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set 00:25:05.592 [2024-12-15 13:39:11.112568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor 00:25:05.592 [2024-12-15 13:39:11.112601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.592 [2024-12-15 13:39:11.112629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.592 [2024-12-15 13:39:11.112640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.592 [2024-12-15 13:39:11.112662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.592 [2024-12-15 13:39:11.112673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.592 13:39:11 -- host/timeout.sh@101 -- # sleep 3 00:25:06.526 [2024-12-15 13:39:12.112758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.526 [2024-12-15 13:39:12.112862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.526 [2024-12-15 13:39:12.112880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420 00:25:06.526 [2024-12-15 13:39:12.112891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set 00:25:06.526 [2024-12-15 13:39:12.112912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor 00:25:06.526 [2024-12-15 13:39:12.112929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:06.526 [2024-12-15 13:39:12.112938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:06.526 [2024-12-15 13:39:12.112947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:06.526 [2024-12-15 13:39:12.112968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:06.526 [2024-12-15 13:39:12.112978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.460 [2024-12-15 13:39:13.113039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.460 [2024-12-15 13:39:13.113137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.460 [2024-12-15 13:39:13.113154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420 00:25:07.460 [2024-12-15 13:39:13.113165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set 00:25:07.460 [2024-12-15 13:39:13.113182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor 00:25:07.460 [2024-12-15 13:39:13.113198] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:07.460 [2024-12-15 13:39:13.113206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:07.460 [2024-12-15 13:39:13.113214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.460 [2024-12-15 13:39:13.113231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:07.460 [2024-12-15 13:39:13.113242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.835 [2024-12-15 13:39:14.115211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.835 [2024-12-15 13:39:14.115308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.835 [2024-12-15 13:39:14.115325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68c0 with addr=10.0.0.2, port=4420 00:25:08.835 [2024-12-15 13:39:14.115335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68c0 is same with the state(5) to be set 00:25:08.835 [2024-12-15 13:39:14.115461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68c0 (9): Bad file descriptor 00:25:08.835 [2024-12-15 13:39:14.115694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:08.835 [2024-12-15 13:39:14.115714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:08.835 [2024-12-15 13:39:14.115724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:08.836 [2024-12-15 13:39:14.118125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:08.836 [2024-12-15 13:39:14.118171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:08.836 13:39:14 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.836 [2024-12-15 13:39:14.359760] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.836 13:39:14 -- host/timeout.sh@103 -- # wait 100794 00:25:09.769 [2024-12-15 13:39:15.142247] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
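The stretch above shows the recovery half of the listener-outage scenario exercised by host/timeout.sh: while the TCP listener is down, every reconnect attempt from the host fails with connect() errno 111, and once nvmf_subsystem_add_listener restores the 10.0.0.2:4420 listener the next controller reset completes. A minimal sketch of the same remove/re-add cycle on the target side, using only the rpc.py calls that appear in this log (the subsystem NQN, address, and port are copied from the run; an already-configured target and the default rpc.py socket are assumed):

  # drop the TCP listener so host connect() calls start failing with errno 111
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # leave it down long enough for a few host reconnect cycles
  sleep 3
  # restore the listener; the host's next reset/reconnect attempt should then succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420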
00:25:15.034
00:25:15.034 Latency(us)
00:25:15.034 [2024-12-15T13:39:20.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.034 [2024-12-15T13:39:20.724Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:15.034 Verification LBA range: start 0x0 length 0x4000
00:25:15.034 NVMe0n1 : 10.01 9015.75 35.22 6699.67 0.00 8131.15 696.32 3019898.88
00:25:15.034 [2024-12-15T13:39:20.724Z] ===================================================================================================================
00:25:15.034 [2024-12-15T13:39:20.724Z] Total : 9015.75 35.22 6699.67 0.00 8131.15 0.00 3019898.88
00:25:15.034 0
00:25:15.034 13:39:19 -- host/timeout.sh@105 -- # killprocess 100630
00:25:15.034 13:39:19 -- common/autotest_common.sh@936 -- # '[' -z 100630 ']'
00:25:15.034 13:39:19 -- common/autotest_common.sh@940 -- # kill -0 100630
00:25:15.034 13:39:19 -- common/autotest_common.sh@941 -- # uname
00:25:15.034 13:39:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:15.034 13:39:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100630
00:25:15.034 13:39:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:15.034 13:39:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:15.034 killing process with pid 100630
13:39:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100630'
Received shutdown signal, test time was about 10.000000 seconds
00:25:15.034
00:25:15.034 Latency(us)
00:25:15.034 [2024-12-15T13:39:20.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.034 [2024-12-15T13:39:20.724Z] ===================================================================================================================
00:25:15.034 [2024-12-15T13:39:20.724Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:15.034 13:39:20 -- common/autotest_common.sh@955 -- # kill 100630
00:25:15.034 13:39:20 -- common/autotest_common.sh@960 -- # wait 100630
00:25:15.034 13:39:20 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:15.034 13:39:20 -- host/timeout.sh@110 -- # bdevperf_pid=100925
00:25:15.034 13:39:20 -- host/timeout.sh@112 -- # waitforlisten 100925 /var/tmp/bdevperf.sock
00:25:15.034 13:39:20 -- common/autotest_common.sh@829 -- # '[' -z 100925 ']'
00:25:15.034 13:39:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:15.034 13:39:20 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:15.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
13:39:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:15.034 13:39:20 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:15.034 13:39:20 -- common/autotest_common.sh@10 -- # set +x
00:25:15.034 [2024-12-15 13:39:20.347228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:25:15.034 [2024-12-15 13:39:20.347316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100925 ] 00:25:15.034 [2024-12-15 13:39:20.476059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.034 [2024-12-15 13:39:20.571774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.970 13:39:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.970 13:39:21 -- common/autotest_common.sh@862 -- # return 0 00:25:15.970 13:39:21 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100925 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:15.970 13:39:21 -- host/timeout.sh@116 -- # dtrace_pid=100949 00:25:15.970 13:39:21 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:15.970 13:39:21 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:16.228 NVMe0n1 00:25:16.228 13:39:21 -- host/timeout.sh@124 -- # rpc_pid=101002 00:25:16.228 13:39:21 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.228 13:39:21 -- host/timeout.sh@125 -- # sleep 1 00:25:16.486 Running I/O for 10 seconds... 00:25:17.421 13:39:22 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.682 [2024-12-15 13:39:23.120914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.120954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.120975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.120983] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.120990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.120997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.121005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.121012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.121019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.121026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.682 [2024-12-15 13:39:23.121033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121047] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121086] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121139] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121182] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121190] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121204] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121232] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121239] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121246] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121265] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121284] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121317] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the 
state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121331] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121339] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121368] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1813ba0 is same with the state(5) to be set 00:25:17.683 [2024-12-15 13:39:23.121806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.121863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.121888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.121899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.121910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.121920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.121942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.121975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.121996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.122019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.122036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.122044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.122062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.122085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.683 [2024-12-15 13:39:23.122094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.683 [2024-12-15 13:39:23.122103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 
[2024-12-15 13:39:23.122231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.684 [2024-12-15 13:39:23.122729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.684 [2024-12-15 13:39:23.122738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.122979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.122987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:17.685 [2024-12-15 13:39:23.122996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.685 [2024-12-15 13:39:23.123406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.685 [2024-12-15 13:39:23.123415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.123984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.123993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.124000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.124008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.124015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.124023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.124030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.686 [2024-12-15 13:39:23.124046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.686 [2024-12-15 13:39:23.124053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 
13:39:23.124115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.687 [2024-12-15 13:39:23.124296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd96780 is same with the state(5) to be set 00:25:17.687 [2024-12-15 13:39:23.124321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:17.687 [2024-12-15 13:39:23.124327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:17.687 [2024-12-15 13:39:23.124334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59104 len:8 PRP1 0x0 PRP2 0x0 00:25:17.687 [2024-12-15 13:39:23.124341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.687 [2024-12-15 13:39:23.124405] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd96780 was disconnected and freed. reset controller. 00:25:17.687 [2024-12-15 13:39:23.124770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.687 [2024-12-15 13:39:23.124856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd118c0 (9): Bad file descriptor 00:25:17.687 [2024-12-15 13:39:23.124999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.687 [2024-12-15 13:39:23.125041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.687 [2024-12-15 13:39:23.125056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd118c0 with addr=10.0.0.2, port=4420 00:25:17.687 [2024-12-15 13:39:23.125065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd118c0 is same with the state(5) to be set 00:25:17.687 [2024-12-15 13:39:23.125081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd118c0 (9): Bad file descriptor 00:25:17.687 [2024-12-15 13:39:23.125095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.687 [2024-12-15 13:39:23.125110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.687 [2024-12-15 13:39:23.125120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.687 [2024-12-15 13:39:23.125137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
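The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 at that moment, so each reconnect attempt by bdev_nvme fails and another controller reset is scheduled. A minimal bash sketch of an equivalent host-side reachability probe follows; the helper name is illustrative and not part of the test suite, only the address and port are taken from this log.

    # Sketch: confirm the target address really refuses TCP connections,
    # matching the errno=111 reported by nvme_tcp_qpair_connect_sock above.
    probe_nvmf_listener() {
        local addr=${1:-10.0.0.2} port=${2:-4420}
        if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
            echo "listener ${addr}:${port} accepted a TCP connection"
        else
            echo "listener ${addr}:${port} refused/unreachable (matches errno=111 above)"
        fi
    }
    probe_nvmf_listener 10.0.0.2 4420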
00:25:17.687 [2024-12-15 13:39:23.125146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.687 13:39:23 -- host/timeout.sh@128 -- # wait 101002 00:25:19.592 [2024-12-15 13:39:25.125362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.592 [2024-12-15 13:39:25.125476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.592 [2024-12-15 13:39:25.125494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd118c0 with addr=10.0.0.2, port=4420 00:25:19.592 [2024-12-15 13:39:25.125507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd118c0 is same with the state(5) to be set 00:25:19.592 [2024-12-15 13:39:25.125545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd118c0 (9): Bad file descriptor 00:25:19.593 [2024-12-15 13:39:25.125567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.593 [2024-12-15 13:39:25.125577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.593 [2024-12-15 13:39:25.125623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.593 [2024-12-15 13:39:25.125667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.593 [2024-12-15 13:39:25.125679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.494 [2024-12-15 13:39:27.125907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.494 [2024-12-15 13:39:27.126033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.494 [2024-12-15 13:39:27.126052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd118c0 with addr=10.0.0.2, port=4420 00:25:21.494 [2024-12-15 13:39:27.126065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd118c0 is same with the state(5) to be set 00:25:21.494 [2024-12-15 13:39:27.126097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd118c0 (9): Bad file descriptor 00:25:21.494 [2024-12-15 13:39:27.126129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.494 [2024-12-15 13:39:27.126140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.494 [2024-12-15 13:39:27.126150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.494 [2024-12-15 13:39:27.126180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:21.494 [2024-12-15 13:39:27.126191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:24.027 [2024-12-15 13:39:29.126264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
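The retries above land roughly two seconds apart (13:39:23, :25, :27, :29), which is what later shows up in trace.txt as the "reconnect delay bdev controller NVMe0" events. A one-line sketch for pulling those attempt timestamps out of a saved copy of this console output; the build.log filename is illustrative.

    # Sketch only; assumes this console output was saved as build.log.
    grep -o '\[2024-12-15 13:39:[0-9.]*\] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect' build.log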
00:25:24.027 [2024-12-15 13:39:29.126343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:24.027 [2024-12-15 13:39:29.126355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:24.027 [2024-12-15 13:39:29.126367] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:24.027 [2024-12-15 13:39:29.126398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:24.597 00:25:24.597 Latency(us) 00:25:24.597 [2024-12-15T13:39:30.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.597 [2024-12-15T13:39:30.287Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:24.597 NVMe0n1 : 8.10 2930.95 11.45 15.80 0.00 43380.89 2844.86 7015926.69 00:25:24.597 [2024-12-15T13:39:30.287Z] =================================================================================================================== 00:25:24.597 [2024-12-15T13:39:30.287Z] Total : 2930.95 11.45 15.80 0.00 43380.89 2844.86 7015926.69 00:25:24.597 0 00:25:24.597 13:39:30 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.597 Attaching 5 probes... 00:25:24.597 1289.994885: reset bdev controller NVMe0 00:25:24.597 1290.152058: reconnect bdev controller NVMe0 00:25:24.597 3290.416164: reconnect delay bdev controller NVMe0 00:25:24.597 3290.454089: reconnect bdev controller NVMe0 00:25:24.597 5290.957758: reconnect delay bdev controller NVMe0 00:25:24.597 5290.987887: reconnect bdev controller NVMe0 00:25:24.597 7291.469931: reconnect delay bdev controller NVMe0 00:25:24.597 7291.497346: reconnect bdev controller NVMe0 00:25:24.597 13:39:30 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:24.597 13:39:30 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:24.597 13:39:30 -- host/timeout.sh@136 -- # kill 100949 00:25:24.597 13:39:30 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.597 13:39:30 -- host/timeout.sh@139 -- # killprocess 100925 00:25:24.597 13:39:30 -- common/autotest_common.sh@936 -- # '[' -z 100925 ']' 00:25:24.597 13:39:30 -- common/autotest_common.sh@940 -- # kill -0 100925 00:25:24.598 13:39:30 -- common/autotest_common.sh@941 -- # uname 00:25:24.598 13:39:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:24.598 13:39:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100925 00:25:24.598 13:39:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:24.598 13:39:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:24.598 13:39:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100925' 00:25:24.598 killing process with pid 100925 00:25:24.598 13:39:30 -- common/autotest_common.sh@955 -- # kill 100925 00:25:24.598 Received shutdown signal, test time was about 8.165862 seconds 00:25:24.598 00:25:24.598 Latency(us) 00:25:24.598 [2024-12-15T13:39:30.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.598 [2024-12-15T13:39:30.288Z] =================================================================================================================== 00:25:24.598 [2024-12-15T13:39:30.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.598 13:39:30 -- common/autotest_common.sh@960 -- # wait 100925 00:25:24.869 
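The check traced at host/timeout.sh@132 above counts "reconnect delay bdev controller NVMe0" lines in trace.txt; grep returns 3 here, so the (( 3 <= 2 )) guard is false, which is consistent with the test requiring more than two delayed reconnects before it kills bdevperf (pid 100949) and the test app (pid 100925). A minimal standalone sketch of an equivalent check is below; it simplifies the actual timeout.sh logic, with the path and grep pattern taken verbatim from the trace above.

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    # Count how many times bdev_nvme had to delay a reconnect during the run;
    # the run above produced 3 such events (at ~3.3 s, ~5.3 s and ~7.3 s).
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than two delayed reconnects, got ${delays}" >&2
        exit 1
    fi
    rm -f "$trace"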
13:39:30 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.146 13:39:30 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:25.146 13:39:30 -- host/timeout.sh@145 -- # nvmftestfini 00:25:25.146 13:39:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:25.146 13:39:30 -- nvmf/common.sh@116 -- # sync 00:25:25.146 13:39:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:25.146 13:39:30 -- nvmf/common.sh@119 -- # set +e 00:25:25.146 13:39:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:25.146 13:39:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:25.146 rmmod nvme_tcp 00:25:25.146 rmmod nvme_fabrics 00:25:25.415 rmmod nvme_keyring 00:25:25.415 13:39:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:25.415 13:39:30 -- nvmf/common.sh@123 -- # set -e 00:25:25.415 13:39:30 -- nvmf/common.sh@124 -- # return 0 00:25:25.415 13:39:30 -- nvmf/common.sh@477 -- # '[' -n 100338 ']' 00:25:25.415 13:39:30 -- nvmf/common.sh@478 -- # killprocess 100338 00:25:25.415 13:39:30 -- common/autotest_common.sh@936 -- # '[' -z 100338 ']' 00:25:25.415 13:39:30 -- common/autotest_common.sh@940 -- # kill -0 100338 00:25:25.415 13:39:30 -- common/autotest_common.sh@941 -- # uname 00:25:25.415 13:39:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:25.415 13:39:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100338 00:25:25.415 13:39:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:25.415 13:39:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:25.415 13:39:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100338' 00:25:25.415 killing process with pid 100338 00:25:25.415 13:39:30 -- common/autotest_common.sh@955 -- # kill 100338 00:25:25.415 13:39:30 -- common/autotest_common.sh@960 -- # wait 100338 00:25:25.673 13:39:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:25.673 13:39:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:25.673 13:39:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:25.673 13:39:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.673 13:39:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:25.673 13:39:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.673 13:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.673 13:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.673 13:39:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:25.673 00:25:25.673 real 0m47.077s 00:25:25.673 user 2m18.007s 00:25:25.673 sys 0m5.190s 00:25:25.673 13:39:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:25.673 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:25.673 ************************************ 00:25:25.673 END TEST nvmf_timeout 00:25:25.673 ************************************ 00:25:25.673 13:39:31 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:25.673 13:39:31 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:25.673 13:39:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.673 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:25.673 13:39:31 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:25.673 00:25:25.673 real 17m28.032s 00:25:25.673 user 55m34.384s 00:25:25.673 sys 3m53.110s 00:25:25.673 13:39:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:25.673 13:39:31 -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.673 ************************************ 00:25:25.673 END TEST nvmf_tcp 00:25:25.673 ************************************ 00:25:25.673 13:39:31 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:25.673 13:39:31 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:25.673 13:39:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:25.673 13:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:25.673 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:25.673 ************************************ 00:25:25.673 START TEST spdkcli_nvmf_tcp 00:25:25.673 ************************************ 00:25:25.673 13:39:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:25.673 * Looking for test storage... 00:25:25.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:25.932 13:39:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:25.932 13:39:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:25.932 13:39:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:25.932 13:39:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:25.932 13:39:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:25.932 13:39:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:25.932 13:39:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:25.932 13:39:31 -- scripts/common.sh@335 -- # IFS=.-: 00:25:25.932 13:39:31 -- scripts/common.sh@335 -- # read -ra ver1 00:25:25.932 13:39:31 -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.932 13:39:31 -- scripts/common.sh@336 -- # read -ra ver2 00:25:25.932 13:39:31 -- scripts/common.sh@337 -- # local 'op=<' 00:25:25.932 13:39:31 -- scripts/common.sh@339 -- # ver1_l=2 00:25:25.932 13:39:31 -- scripts/common.sh@340 -- # ver2_l=1 00:25:25.932 13:39:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:25.932 13:39:31 -- scripts/common.sh@343 -- # case "$op" in 00:25:25.932 13:39:31 -- scripts/common.sh@344 -- # : 1 00:25:25.932 13:39:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:25.932 13:39:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.932 13:39:31 -- scripts/common.sh@364 -- # decimal 1 00:25:25.932 13:39:31 -- scripts/common.sh@352 -- # local d=1 00:25:25.932 13:39:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.932 13:39:31 -- scripts/common.sh@354 -- # echo 1 00:25:25.932 13:39:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:25.932 13:39:31 -- scripts/common.sh@365 -- # decimal 2 00:25:25.932 13:39:31 -- scripts/common.sh@352 -- # local d=2 00:25:25.932 13:39:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.932 13:39:31 -- scripts/common.sh@354 -- # echo 2 00:25:25.932 13:39:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:25.932 13:39:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:25.932 13:39:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:25.932 13:39:31 -- scripts/common.sh@367 -- # return 0 00:25:25.932 13:39:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.932 13:39:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:25.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.932 --rc genhtml_branch_coverage=1 00:25:25.932 --rc genhtml_function_coverage=1 00:25:25.932 --rc genhtml_legend=1 00:25:25.932 --rc geninfo_all_blocks=1 00:25:25.933 --rc geninfo_unexecuted_blocks=1 00:25:25.933 00:25:25.933 ' 00:25:25.933 13:39:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.933 --rc genhtml_branch_coverage=1 00:25:25.933 --rc genhtml_function_coverage=1 00:25:25.933 --rc genhtml_legend=1 00:25:25.933 --rc geninfo_all_blocks=1 00:25:25.933 --rc geninfo_unexecuted_blocks=1 00:25:25.933 00:25:25.933 ' 00:25:25.933 13:39:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.933 --rc genhtml_branch_coverage=1 00:25:25.933 --rc genhtml_function_coverage=1 00:25:25.933 --rc genhtml_legend=1 00:25:25.933 --rc geninfo_all_blocks=1 00:25:25.933 --rc geninfo_unexecuted_blocks=1 00:25:25.933 00:25:25.933 ' 00:25:25.933 13:39:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.933 --rc genhtml_branch_coverage=1 00:25:25.933 --rc genhtml_function_coverage=1 00:25:25.933 --rc genhtml_legend=1 00:25:25.933 --rc geninfo_all_blocks=1 00:25:25.933 --rc geninfo_unexecuted_blocks=1 00:25:25.933 00:25:25.933 ' 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:25.933 13:39:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:25.933 13:39:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:25.933 13:39:31 -- nvmf/common.sh@7 -- # uname -s 00:25:25.933 13:39:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.933 13:39:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.933 13:39:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.933 13:39:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.933 13:39:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.933 13:39:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.933 13:39:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
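For readers tracing this spdkcli_nvmf_tcp run: the flow that follows boils down to starting nvmf_tgt and then feeding spdkcli_job.py a batch of 'command' 'expected match' should_succeed lines, followed by a match against a stored fixture. A condensed, hand-runnable sketch of that same flow, using only paths and arguments that appear later in this log (the redirect target for the spdkcli dump is inferred from the rm -f shown in check_match), would be:

  # Start the target on cores 0x3, as spdkcli/common.sh does below
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &

  # spdkcli_job.py consumes a single argument holding 'command' 'expected match' should_succeed lines
  /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
  'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
  '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
  '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4' '127.0.0.1:4260' True"

  # check_match then dumps the /nvmf tree and diffs it against the stored fixture
  /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf > /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
  /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match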
00:25:25.933 13:39:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.933 13:39:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.933 13:39:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.933 13:39:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:25:25.933 13:39:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:25:25.933 13:39:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.933 13:39:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.933 13:39:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:25.933 13:39:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:25.933 13:39:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.933 13:39:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.933 13:39:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.933 13:39:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.933 13:39:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.933 13:39:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.933 13:39:31 -- paths/export.sh@5 -- # export PATH 00:25:25.933 13:39:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.933 13:39:31 -- nvmf/common.sh@46 -- # : 0 00:25:25.933 13:39:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:25.933 13:39:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:25.933 13:39:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:25.933 13:39:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.933 13:39:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.933 13:39:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:25.933 13:39:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:25.933 13:39:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:25.933 13:39:31 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:25.933 13:39:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.933 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:25.933 13:39:31 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:25.933 13:39:31 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101234 00:25:25.933 13:39:31 -- spdkcli/common.sh@34 -- # waitforlisten 101234 00:25:25.933 13:39:31 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:25.933 13:39:31 -- common/autotest_common.sh@829 -- # '[' -z 101234 ']' 00:25:25.933 13:39:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.933 13:39:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.933 13:39:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.933 13:39:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.933 13:39:31 -- common/autotest_common.sh@10 -- # set +x 00:25:25.933 [2024-12-15 13:39:31.555258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:25.933 [2024-12-15 13:39:31.555352] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101234 ] 00:25:26.192 [2024-12-15 13:39:31.693833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:26.192 [2024-12-15 13:39:31.754659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:26.192 [2024-12-15 13:39:31.755094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.192 [2024-12-15 13:39:31.755100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.126 13:39:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.126 13:39:32 -- common/autotest_common.sh@862 -- # return 0 00:25:27.126 13:39:32 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:27.126 13:39:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.126 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:27.126 13:39:32 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:27.126 13:39:32 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:27.126 13:39:32 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:27.126 13:39:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.126 13:39:32 -- common/autotest_common.sh@10 -- # set +x 00:25:27.126 13:39:32 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:27.126 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:27.126 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:27.126 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:27.126 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:27.126 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:27.126 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:27.126 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:27.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:27.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:27.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.126 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:27.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.127 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:27.127 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:27.127 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:27.127 ' 00:25:27.693 [2024-12-15 13:39:33.077561] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:30.223 [2024-12-15 13:39:35.337025] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.158 [2024-12-15 13:39:36.626400] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:33.689 [2024-12-15 13:39:39.029384] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:35.591 [2024-12-15 13:39:41.096101] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:37.493 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:37.493 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:37.493 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.493 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.493 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:37.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:37.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:37.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:37.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:37.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:37.494 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:37.494 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:37.494 13:39:42 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:37.494 13:39:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.494 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.494 13:39:42 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:37.494 13:39:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.494 13:39:42 -- common/autotest_common.sh@10 -- # set +x 00:25:37.494 13:39:42 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:37.494 13:39:42 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:37.752 13:39:43 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:37.752 13:39:43 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:37.752 13:39:43 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:37.752 13:39:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:37.752 13:39:43 -- common/autotest_common.sh@10 -- # set +x 00:25:37.752 13:39:43 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:37.752 13:39:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:37.752 13:39:43 -- common/autotest_common.sh@10 -- # set +x 00:25:37.752 13:39:43 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:37.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:37.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:37.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:37.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:37.753 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:37.753 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:37.753 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:37.753 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:37.753 ' 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:43.037 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:43.037 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:43.037 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:43.037 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:43.296 13:39:48 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:43.296 13:39:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.296 13:39:48 -- common/autotest_common.sh@10 -- # set +x 00:25:43.296 13:39:48 -- spdkcli/nvmf.sh@90 -- # killprocess 101234 00:25:43.296 13:39:48 -- common/autotest_common.sh@936 -- # '[' -z 101234 ']' 00:25:43.296 13:39:48 -- common/autotest_common.sh@940 -- # kill -0 101234 00:25:43.296 13:39:48 -- common/autotest_common.sh@941 -- # uname 00:25:43.296 13:39:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.296 13:39:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101234 00:25:43.296 13:39:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:43.296 13:39:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:43.296 13:39:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101234' 00:25:43.296 killing process with pid 101234 00:25:43.296 13:39:48 -- common/autotest_common.sh@955 -- # kill 101234 00:25:43.296 [2024-12-15 13:39:48.919307] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:43.296 13:39:48 -- common/autotest_common.sh@960 -- # wait 101234 00:25:43.865 13:39:49 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:43.865 13:39:49 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:43.865 13:39:49 -- spdkcli/common.sh@13 -- # '[' -n 101234 ']' 00:25:43.865 13:39:49 -- spdkcli/common.sh@14 -- # killprocess 101234 00:25:43.865 Process with pid 101234 is not found 00:25:43.865 13:39:49 -- common/autotest_common.sh@936 -- # '[' -z 101234 ']' 00:25:43.865 13:39:49 -- common/autotest_common.sh@940 -- # kill -0 101234 00:25:43.865 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101234) - No such process 00:25:43.865 13:39:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101234 is not found' 00:25:43.865 13:39:49 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:43.865 13:39:49 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:43.865 13:39:49 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:43.865 ************************************ 00:25:43.865 END TEST spdkcli_nvmf_tcp 00:25:43.865 ************************************ 00:25:43.865 00:25:43.865 real 0m17.976s 00:25:43.865 user 0m38.804s 00:25:43.865 sys 0m0.946s 00:25:43.865 13:39:49 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:25:43.865 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:25:43.865 13:39:49 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:43.865 13:39:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:43.865 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:25:43.865 ************************************ 00:25:43.865 START TEST nvmf_identify_passthru 00:25:43.865 ************************************ 00:25:43.865 13:39:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:43.865 * Looking for test storage... 00:25:43.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:43.865 13:39:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:43.865 13:39:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:43.865 13:39:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:43.865 13:39:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:43.865 13:39:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:43.865 13:39:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:43.865 13:39:49 -- scripts/common.sh@335 -- # IFS=.-: 00:25:43.865 13:39:49 -- scripts/common.sh@335 -- # read -ra ver1 00:25:43.865 13:39:49 -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.865 13:39:49 -- scripts/common.sh@336 -- # read -ra ver2 00:25:43.865 13:39:49 -- scripts/common.sh@337 -- # local 'op=<' 00:25:43.865 13:39:49 -- scripts/common.sh@339 -- # ver1_l=2 00:25:43.865 13:39:49 -- scripts/common.sh@340 -- # ver2_l=1 00:25:43.865 13:39:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:43.865 13:39:49 -- scripts/common.sh@343 -- # case "$op" in 00:25:43.865 13:39:49 -- scripts/common.sh@344 -- # : 1 00:25:43.865 13:39:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:43.865 13:39:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.865 13:39:49 -- scripts/common.sh@364 -- # decimal 1 00:25:43.865 13:39:49 -- scripts/common.sh@352 -- # local d=1 00:25:43.865 13:39:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.865 13:39:49 -- scripts/common.sh@354 -- # echo 1 00:25:43.865 13:39:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:43.865 13:39:49 -- scripts/common.sh@365 -- # decimal 2 00:25:43.865 13:39:49 -- scripts/common.sh@352 -- # local d=2 00:25:43.865 13:39:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.865 13:39:49 -- scripts/common.sh@354 -- # echo 2 00:25:43.865 13:39:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:43.865 13:39:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:43.865 13:39:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:43.865 13:39:49 -- scripts/common.sh@367 -- # return 0 00:25:43.865 13:39:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.865 --rc genhtml_branch_coverage=1 00:25:43.865 --rc genhtml_function_coverage=1 00:25:43.865 --rc genhtml_legend=1 00:25:43.865 --rc geninfo_all_blocks=1 00:25:43.865 --rc geninfo_unexecuted_blocks=1 00:25:43.865 00:25:43.865 ' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.865 --rc genhtml_branch_coverage=1 00:25:43.865 --rc genhtml_function_coverage=1 00:25:43.865 --rc genhtml_legend=1 00:25:43.865 --rc geninfo_all_blocks=1 00:25:43.865 --rc geninfo_unexecuted_blocks=1 00:25:43.865 00:25:43.865 ' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.865 --rc genhtml_branch_coverage=1 00:25:43.865 --rc genhtml_function_coverage=1 00:25:43.865 --rc genhtml_legend=1 00:25:43.865 --rc geninfo_all_blocks=1 00:25:43.865 --rc geninfo_unexecuted_blocks=1 00:25:43.865 00:25:43.865 ' 00:25:43.865 13:39:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.865 --rc genhtml_branch_coverage=1 00:25:43.865 --rc genhtml_function_coverage=1 00:25:43.865 --rc genhtml_legend=1 00:25:43.865 --rc geninfo_all_blocks=1 00:25:43.865 --rc geninfo_unexecuted_blocks=1 00:25:43.865 00:25:43.865 ' 00:25:43.865 13:39:49 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:43.865 13:39:49 -- nvmf/common.sh@7 -- # uname -s 00:25:43.865 13:39:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.865 13:39:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.865 13:39:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.865 13:39:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.865 13:39:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.865 13:39:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.865 13:39:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.865 13:39:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.865 13:39:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.865 13:39:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.865 13:39:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 
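The nvmftestinit call coming up builds the virtual test network this passthru test runs over (NET_TYPE=virt): veth pairs whose initiator end stays in the root namespace while the target ends move into the nvmf_tgt_ns_spdk namespace, all joined by a bridge. Condensed from the commands logged below (a sketch only; the link-up, second target interface, and flush steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1 in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # the bridge ties the *_br peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target reachability check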
00:25:43.865 13:39:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:25:43.865 13:39:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.865 13:39:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.865 13:39:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:43.865 13:39:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.865 13:39:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.865 13:39:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.865 13:39:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.865 13:39:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@5 -- # export PATH 00:25:43.866 13:39:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- nvmf/common.sh@46 -- # : 0 00:25:43.866 13:39:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:43.866 13:39:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:43.866 13:39:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:43.866 13:39:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.866 13:39:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.866 13:39:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:43.866 13:39:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:43.866 13:39:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:43.866 13:39:49 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:43.866 13:39:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.866 13:39:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.866 13:39:49 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.866 13:39:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- paths/export.sh@5 -- # export PATH 00:25:43.866 13:39:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.866 13:39:49 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:43.866 13:39:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:43.866 13:39:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.866 13:39:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:43.866 13:39:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:43.866 13:39:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:43.866 13:39:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.866 13:39:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:43.866 13:39:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.866 13:39:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:43.866 13:39:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:43.866 13:39:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:43.866 13:39:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:43.866 13:39:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:43.866 13:39:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:43.866 13:39:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.866 13:39:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.866 13:39:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:43.866 13:39:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:43.866 13:39:49 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:43.866 13:39:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:43.866 13:39:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:43.866 13:39:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.866 13:39:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:43.866 13:39:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:43.866 13:39:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:43.866 13:39:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:43.866 13:39:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:43.866 13:39:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:43.866 Cannot find device "nvmf_tgt_br" 00:25:43.866 13:39:49 -- nvmf/common.sh@154 -- # true 00:25:43.866 13:39:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.866 Cannot find device "nvmf_tgt_br2" 00:25:43.866 13:39:49 -- nvmf/common.sh@155 -- # true 00:25:43.866 13:39:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:44.125 13:39:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:44.125 Cannot find device "nvmf_tgt_br" 00:25:44.125 13:39:49 -- nvmf/common.sh@157 -- # true 00:25:44.125 13:39:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:44.125 Cannot find device "nvmf_tgt_br2" 00:25:44.125 13:39:49 -- nvmf/common.sh@158 -- # true 00:25:44.125 13:39:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:44.125 13:39:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:44.125 13:39:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:44.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.125 13:39:49 -- nvmf/common.sh@161 -- # true 00:25:44.125 13:39:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:44.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:44.125 13:39:49 -- nvmf/common.sh@162 -- # true 00:25:44.125 13:39:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:44.125 13:39:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:44.125 13:39:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:44.125 13:39:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:44.125 13:39:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:44.125 13:39:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:44.125 13:39:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:44.125 13:39:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:44.125 13:39:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:44.125 13:39:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:44.125 13:39:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:44.125 13:39:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:44.125 13:39:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:44.125 13:39:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:44.125 13:39:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:44.125 13:39:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:44.125 13:39:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:44.125 13:39:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:44.125 13:39:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:44.125 13:39:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:44.125 13:39:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:44.125 13:39:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:44.125 13:39:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:44.125 13:39:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:44.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:25:44.125 00:25:44.125 --- 10.0.0.2 ping statistics --- 00:25:44.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.125 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:44.125 13:39:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:44.384 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:44.384 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:44.384 00:25:44.384 --- 10.0.0.3 ping statistics --- 00:25:44.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.384 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:44.384 13:39:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:44.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:44.384 00:25:44.384 --- 10.0.0.1 ping statistics --- 00:25:44.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.384 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:44.384 13:39:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.384 13:39:49 -- nvmf/common.sh@421 -- # return 0 00:25:44.384 13:39:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:44.384 13:39:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.384 13:39:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:44.384 13:39:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:44.384 13:39:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.384 13:39:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:44.384 13:39:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:44.384 13:39:49 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:44.384 13:39:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:44.384 13:39:49 -- common/autotest_common.sh@10 -- # set +x 00:25:44.384 13:39:49 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:44.384 13:39:49 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:44.384 13:39:49 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:44.384 13:39:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:44.384 13:39:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:44.384 13:39:49 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:44.384 13:39:49 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:44.384 13:39:49 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:44.384 13:39:49 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:44.384 13:39:49 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:44.384 13:39:49 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:44.384 13:39:49 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:44.384 13:39:49 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:44.384 13:39:49 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:44.384 13:39:49 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:44.384 13:39:49 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:44.384 13:39:49 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:44.384 13:39:49 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:44.643 13:39:50 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:44.643 13:39:50 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:44.643 13:39:50 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:44.643 13:39:50 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:44.643 13:39:50 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:44.643 13:39:50 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:44.643 13:39:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:44.643 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.643 13:39:50 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:44.643 13:39:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:44.643 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.643 13:39:50 -- target/identify_passthru.sh@31 -- # nvmfpid=101735 00:25:44.643 13:39:50 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:44.643 13:39:50 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:44.643 13:39:50 -- target/identify_passthru.sh@35 -- # waitforlisten 101735 00:25:44.643 13:39:50 -- common/autotest_common.sh@829 -- # '[' -z 101735 ']' 00:25:44.643 13:39:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.643 13:39:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.643 13:39:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.643 13:39:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.643 13:39:50 -- common/autotest_common.sh@10 -- # set +x 00:25:44.902 [2024-12-15 13:39:50.376791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:44.902 [2024-12-15 13:39:50.376886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.902 [2024-12-15 13:39:50.519727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.161 [2024-12-15 13:39:50.609152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:45.161 [2024-12-15 13:39:50.609355] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.161 [2024-12-15 13:39:50.609373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.161 [2024-12-15 13:39:50.609385] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
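Once this nvmf_tgt instance (started with --wait-for-rpc) is up, the test wires passthru entirely over the RPC socket; the rpc_cmd calls traced below are the harness's wrapper around scripts/rpc.py. Condensed to its essential sequence, as a sketch of what follows rather than a replacement for it:

  # Enable the custom identify handler before framework init, then finish init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init

  # TCP transport, a local PCIe NVMe controller as the backing bdev, and an NVMe-oF subsystem around it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The actual check: serial/model seen over NVMe/TCP must match what the PCIe device reported (12340 / QEMU)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'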
00:25:45.161 [2024-12-15 13:39:50.609565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.161 [2024-12-15 13:39:50.610199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.161 [2024-12-15 13:39:50.610297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.161 [2024-12-15 13:39:50.610302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.728 13:39:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:45.728 13:39:51 -- common/autotest_common.sh@862 -- # return 0 00:25:45.728 13:39:51 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:45.728 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.728 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.728 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.728 13:39:51 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:45.728 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.728 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.728 [2024-12-15 13:39:51.389202] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:45.728 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.728 13:39:51 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.728 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.728 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.728 [2024-12-15 13:39:51.403360] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.987 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.987 13:39:51 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:45.987 13:39:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 13:39:51 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:45.987 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 Nvme0n1 00:25:45.987 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.987 13:39:51 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:45.987 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.987 13:39:51 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:45.987 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.987 13:39:51 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.987 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 [2024-12-15 13:39:51.546313] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.987 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:45.987 13:39:51 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:45.987 13:39:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.987 13:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:45.987 [2024-12-15 13:39:51.554066] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:45.987 [ 00:25:45.987 { 00:25:45.987 "allow_any_host": true, 00:25:45.987 "hosts": [], 00:25:45.987 "listen_addresses": [], 00:25:45.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.987 "subtype": "Discovery" 00:25:45.987 }, 00:25:45.987 { 00:25:45.987 "allow_any_host": true, 00:25:45.987 "hosts": [], 00:25:45.987 "listen_addresses": [ 00:25:45.987 { 00:25:45.987 "adrfam": "IPv4", 00:25:45.987 "traddr": "10.0.0.2", 00:25:45.987 "transport": "TCP", 00:25:45.987 "trsvcid": "4420", 00:25:45.987 "trtype": "TCP" 00:25:45.987 } 00:25:45.987 ], 00:25:45.987 "max_cntlid": 65519, 00:25:45.987 "max_namespaces": 1, 00:25:45.987 "min_cntlid": 1, 00:25:45.987 "model_number": "SPDK bdev Controller", 00:25:45.987 "namespaces": [ 00:25:45.987 { 00:25:45.987 "bdev_name": "Nvme0n1", 00:25:45.987 "name": "Nvme0n1", 00:25:45.987 "nguid": "F1B719479AB54D65AB933307884729BC", 00:25:45.987 "nsid": 1, 00:25:45.987 "uuid": "f1b71947-9ab5-4d65-ab93-3307884729bc" 00:25:45.987 } 00:25:45.987 ], 00:25:45.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.987 "serial_number": "SPDK00000000000001", 00:25:45.987 "subtype": "NVMe" 00:25:45.987 } 00:25:45.987 ] 00:25:45.988 13:39:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.988 13:39:51 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:45.988 13:39:51 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:45.988 13:39:51 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:46.247 13:39:51 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:46.247 13:39:51 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:46.247 13:39:51 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:46.247 13:39:51 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:46.506 13:39:52 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:46.506 13:39:52 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:46.506 13:39:52 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:46.506 13:39:52 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.506 13:39:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.506 13:39:52 -- common/autotest_common.sh@10 -- # set +x 00:25:46.506 13:39:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.506 13:39:52 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:46.506 13:39:52 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:46.506 13:39:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:46.506 13:39:52 -- nvmf/common.sh@116 -- # sync 00:25:46.506 13:39:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:46.506 13:39:52 -- nvmf/common.sh@119 -- # set +e 00:25:46.506 13:39:52 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:46.506 13:39:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:46.506 rmmod nvme_tcp 00:25:46.506 rmmod nvme_fabrics 00:25:46.506 rmmod nvme_keyring 00:25:46.506 13:39:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:46.506 13:39:52 -- nvmf/common.sh@123 -- # set -e 00:25:46.506 13:39:52 -- nvmf/common.sh@124 -- # return 0 00:25:46.506 13:39:52 -- nvmf/common.sh@477 -- # '[' -n 101735 ']' 00:25:46.506 13:39:52 -- nvmf/common.sh@478 -- # killprocess 101735 00:25:46.506 13:39:52 -- common/autotest_common.sh@936 -- # '[' -z 101735 ']' 00:25:46.506 13:39:52 -- common/autotest_common.sh@940 -- # kill -0 101735 00:25:46.506 13:39:52 -- common/autotest_common.sh@941 -- # uname 00:25:46.506 13:39:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:46.506 13:39:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101735 00:25:46.506 13:39:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:46.506 13:39:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:46.506 killing process with pid 101735 00:25:46.506 13:39:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101735' 00:25:46.506 13:39:52 -- common/autotest_common.sh@955 -- # kill 101735 00:25:46.506 [2024-12-15 13:39:52.144816] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:46.506 13:39:52 -- common/autotest_common.sh@960 -- # wait 101735 00:25:46.765 13:39:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:46.765 13:39:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:46.765 13:39:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:46.765 13:39:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.765 13:39:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:46.765 13:39:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.765 13:39:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:46.765 13:39:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.023 13:39:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:47.023 00:25:47.023 real 0m3.151s 00:25:47.023 user 0m7.547s 00:25:47.023 sys 0m0.884s 00:25:47.023 13:39:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:47.023 13:39:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.023 ************************************ 00:25:47.023 END TEST nvmf_identify_passthru 00:25:47.023 ************************************ 00:25:47.023 13:39:52 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:47.023 13:39:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:47.024 13:39:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:47.024 13:39:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.024 ************************************ 00:25:47.024 START TEST nvmf_dif 00:25:47.024 ************************************ 00:25:47.024 13:39:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:47.024 * Looking for test storage... 
00:25:47.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:47.024 13:39:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:47.024 13:39:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:47.024 13:39:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:47.024 13:39:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:47.024 13:39:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:47.024 13:39:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:47.024 13:39:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:47.024 13:39:52 -- scripts/common.sh@335 -- # IFS=.-: 00:25:47.024 13:39:52 -- scripts/common.sh@335 -- # read -ra ver1 00:25:47.024 13:39:52 -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.024 13:39:52 -- scripts/common.sh@336 -- # read -ra ver2 00:25:47.024 13:39:52 -- scripts/common.sh@337 -- # local 'op=<' 00:25:47.024 13:39:52 -- scripts/common.sh@339 -- # ver1_l=2 00:25:47.024 13:39:52 -- scripts/common.sh@340 -- # ver2_l=1 00:25:47.024 13:39:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:47.024 13:39:52 -- scripts/common.sh@343 -- # case "$op" in 00:25:47.024 13:39:52 -- scripts/common.sh@344 -- # : 1 00:25:47.024 13:39:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:47.024 13:39:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.024 13:39:52 -- scripts/common.sh@364 -- # decimal 1 00:25:47.024 13:39:52 -- scripts/common.sh@352 -- # local d=1 00:25:47.024 13:39:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.024 13:39:52 -- scripts/common.sh@354 -- # echo 1 00:25:47.024 13:39:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:47.024 13:39:52 -- scripts/common.sh@365 -- # decimal 2 00:25:47.024 13:39:52 -- scripts/common.sh@352 -- # local d=2 00:25:47.024 13:39:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.024 13:39:52 -- scripts/common.sh@354 -- # echo 2 00:25:47.024 13:39:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:47.024 13:39:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:47.024 13:39:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:47.024 13:39:52 -- scripts/common.sh@367 -- # return 0 00:25:47.024 13:39:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.024 13:39:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:47.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.024 --rc genhtml_branch_coverage=1 00:25:47.024 --rc genhtml_function_coverage=1 00:25:47.024 --rc genhtml_legend=1 00:25:47.024 --rc geninfo_all_blocks=1 00:25:47.024 --rc geninfo_unexecuted_blocks=1 00:25:47.024 00:25:47.024 ' 00:25:47.024 13:39:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:47.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.024 --rc genhtml_branch_coverage=1 00:25:47.024 --rc genhtml_function_coverage=1 00:25:47.024 --rc genhtml_legend=1 00:25:47.024 --rc geninfo_all_blocks=1 00:25:47.024 --rc geninfo_unexecuted_blocks=1 00:25:47.024 00:25:47.024 ' 00:25:47.024 13:39:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:47.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.024 --rc genhtml_branch_coverage=1 00:25:47.024 --rc genhtml_function_coverage=1 00:25:47.024 --rc genhtml_legend=1 00:25:47.024 --rc geninfo_all_blocks=1 00:25:47.024 --rc geninfo_unexecuted_blocks=1 00:25:47.024 00:25:47.024 ' 00:25:47.024 
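The lcov probe traced above runs through the version comparison in scripts/common.sh: both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right. A stand-alone sketch of that logic (the real cmp_versions also sanitizes non-numeric fields through its decimal helper, which this sketch skips):

vercmp() {   # usage: vercmp 1.15 '<' 2   -> exit status 0 if the relation holds
    local IFS=.-: op=$2 i a b
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$3"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}        # missing fields count as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]                         # all fields equal
}
vercmp "$(lcov --version | awk '{print $NF}')" '<' 2 && echo "lcov older than 2, use legacy --rc options"

This is what decides whether the legacy lcov_branch_coverage/lcov_function_coverage options exported above are used.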
13:39:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:47.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.024 --rc genhtml_branch_coverage=1 00:25:47.024 --rc genhtml_function_coverage=1 00:25:47.024 --rc genhtml_legend=1 00:25:47.024 --rc geninfo_all_blocks=1 00:25:47.024 --rc geninfo_unexecuted_blocks=1 00:25:47.024 00:25:47.024 ' 00:25:47.024 13:39:52 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:47.024 13:39:52 -- nvmf/common.sh@7 -- # uname -s 00:25:47.024 13:39:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.024 13:39:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.024 13:39:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.024 13:39:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.024 13:39:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.024 13:39:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.024 13:39:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.024 13:39:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.024 13:39:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.024 13:39:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.024 13:39:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:25:47.024 13:39:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:25:47.024 13:39:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.024 13:39:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.024 13:39:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:47.024 13:39:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:47.024 13:39:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.024 13:39:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.024 13:39:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.024 13:39:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.024 13:39:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.024 13:39:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.024 13:39:52 -- paths/export.sh@5 -- # export PATH 00:25:47.024 13:39:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.024 13:39:52 -- nvmf/common.sh@46 -- # : 0 00:25:47.024 13:39:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:47.024 13:39:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:47.024 13:39:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:47.024 13:39:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.024 13:39:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.024 13:39:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:47.024 13:39:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:47.024 13:39:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:47.284 13:39:52 -- target/dif.sh@15 -- # NULL_META=16 00:25:47.284 13:39:52 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:47.284 13:39:52 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:47.284 13:39:52 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:47.284 13:39:52 -- target/dif.sh@135 -- # nvmftestinit 00:25:47.284 13:39:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:47.284 13:39:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.284 13:39:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:47.284 13:39:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:47.284 13:39:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:47.284 13:39:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.284 13:39:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.284 13:39:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.284 13:39:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:47.284 13:39:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:47.284 13:39:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:47.284 13:39:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:47.284 13:39:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:47.284 13:39:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:47.284 13:39:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.284 13:39:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.284 13:39:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:47.284 13:39:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:47.284 13:39:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:47.284 13:39:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:47.284 13:39:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:47.284 13:39:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.284 13:39:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:47.284 13:39:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:47.284 13:39:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:47.284 13:39:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:47.284 13:39:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:47.284 13:39:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:47.284 Cannot find device "nvmf_tgt_br" 
00:25:47.284 13:39:52 -- nvmf/common.sh@154 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:47.284 Cannot find device "nvmf_tgt_br2" 00:25:47.284 13:39:52 -- nvmf/common.sh@155 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:47.284 13:39:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:47.284 Cannot find device "nvmf_tgt_br" 00:25:47.284 13:39:52 -- nvmf/common.sh@157 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:47.284 Cannot find device "nvmf_tgt_br2" 00:25:47.284 13:39:52 -- nvmf/common.sh@158 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:47.284 13:39:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:47.284 13:39:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:47.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.284 13:39:52 -- nvmf/common.sh@161 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:47.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:47.284 13:39:52 -- nvmf/common.sh@162 -- # true 00:25:47.284 13:39:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:47.284 13:39:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:47.284 13:39:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:47.284 13:39:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:47.284 13:39:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:47.284 13:39:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:47.284 13:39:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:47.284 13:39:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:47.284 13:39:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:47.284 13:39:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:47.284 13:39:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:47.284 13:39:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:47.284 13:39:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:47.284 13:39:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:47.284 13:39:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:47.284 13:39:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:47.543 13:39:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:47.543 13:39:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:47.543 13:39:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:47.543 13:39:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:47.543 13:39:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:47.543 13:39:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:47.543 13:39:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:47.543 13:39:53 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:47.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:25:47.543 00:25:47.543 --- 10.0.0.2 ping statistics --- 00:25:47.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.543 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:47.543 13:39:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:47.543 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:47.543 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:25:47.543 00:25:47.543 --- 10.0.0.3 ping statistics --- 00:25:47.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.543 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:25:47.543 13:39:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:47.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:25:47.543 00:25:47.543 --- 10.0.0.1 ping statistics --- 00:25:47.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.543 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:47.543 13:39:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.543 13:39:53 -- nvmf/common.sh@421 -- # return 0 00:25:47.543 13:39:53 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:47.543 13:39:53 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:47.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:47.802 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:47.802 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:47.802 13:39:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.802 13:39:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:47.802 13:39:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:47.802 13:39:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.802 13:39:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:47.802 13:39:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:47.802 13:39:53 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:47.802 13:39:53 -- target/dif.sh@137 -- # nvmfappstart 00:25:47.802 13:39:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:47.802 13:39:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.802 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:25:47.802 13:39:53 -- nvmf/common.sh@469 -- # nvmfpid=102094 00:25:47.802 13:39:53 -- nvmf/common.sh@470 -- # waitforlisten 102094 00:25:47.802 13:39:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:47.802 13:39:53 -- common/autotest_common.sh@829 -- # '[' -z 102094 ']' 00:25:47.802 13:39:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.802 13:39:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.802 13:39:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
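Condensed, the nvmf_veth_init sequence traced above builds a small bridged topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, both target interfaces live inside the nvmf_tgt_ns_spdk namespace, and everything hangs off the nvmf_br bridge. The commands below are lifted from the trace; the earlier "Cannot find device" and "Cannot open network namespace" messages are only the idempotent cleanup pass removing devices that did not exist yet.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2     # verify the first target address is reachable from the initiator side
# The target app is then launched inside the namespace (pid 102094 in this run) and the
# harness waits for its RPC socket before issuing nvmf_create_transport:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &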
00:25:47.802 13:39:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.802 13:39:53 -- common/autotest_common.sh@10 -- # set +x 00:25:48.061 [2024-12-15 13:39:53.511211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:48.061 [2024-12-15 13:39:53.511302] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.061 [2024-12-15 13:39:53.654513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.320 [2024-12-15 13:39:53.759195] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.320 [2024-12-15 13:39:53.759386] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.320 [2024-12-15 13:39:53.759403] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.320 [2024-12-15 13:39:53.759414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.320 [2024-12-15 13:39:53.759461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.888 13:39:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.888 13:39:54 -- common/autotest_common.sh@862 -- # return 0 00:25:48.888 13:39:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:48.888 13:39:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.888 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.888 13:39:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.888 13:39:54 -- target/dif.sh@139 -- # create_transport 00:25:48.888 13:39:54 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:48.888 13:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.888 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.888 [2024-12-15 13:39:54.553541] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.888 13:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.888 13:39:54 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:48.888 13:39:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:48.888 13:39:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.888 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.888 ************************************ 00:25:48.888 START TEST fio_dif_1_default 00:25:48.888 ************************************ 00:25:48.888 13:39:54 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:48.888 13:39:54 -- target/dif.sh@86 -- # create_subsystems 0 00:25:48.888 13:39:54 -- target/dif.sh@28 -- # local sub 00:25:48.888 13:39:54 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.888 13:39:54 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.888 13:39:54 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.888 13:39:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:48.888 13:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.888 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.147 bdev_null0 00:25:49.147 13:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.147 13:39:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:49.147 13:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.147 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.147 13:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.147 13:39:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:49.147 13:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.147 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.147 13:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.147 13:39:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:49.147 13:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.147 13:39:54 -- common/autotest_common.sh@10 -- # set +x 00:25:49.147 [2024-12-15 13:39:54.597769] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.147 13:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.147 13:39:54 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:49.147 13:39:54 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:49.147 13:39:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:49.147 13:39:54 -- nvmf/common.sh@520 -- # config=() 00:25:49.147 13:39:54 -- nvmf/common.sh@520 -- # local subsystem config 00:25:49.147 13:39:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.147 13:39:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.147 13:39:54 -- target/dif.sh@82 -- # gen_fio_conf 00:25:49.147 13:39:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.147 { 00:25:49.147 "params": { 00:25:49.147 "name": "Nvme$subsystem", 00:25:49.147 "trtype": "$TEST_TRANSPORT", 00:25:49.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.147 "adrfam": "ipv4", 00:25:49.147 "trsvcid": "$NVMF_PORT", 00:25:49.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.147 "hdgst": ${hdgst:-false}, 00:25:49.147 "ddgst": ${ddgst:-false} 00:25:49.147 }, 00:25:49.147 "method": "bdev_nvme_attach_controller" 00:25:49.147 } 00:25:49.147 EOF 00:25:49.147 )") 00:25:49.147 13:39:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.147 13:39:54 -- target/dif.sh@54 -- # local file 00:25:49.147 13:39:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:49.147 13:39:54 -- target/dif.sh@56 -- # cat 00:25:49.147 13:39:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.147 13:39:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:49.147 13:39:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.147 13:39:54 -- nvmf/common.sh@542 -- # cat 00:25:49.147 13:39:54 -- common/autotest_common.sh@1330 -- # shift 00:25:49.147 13:39:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:49.147 13:39:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.147 13:39:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:49.147 13:39:54 -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.147 13:39:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.147 
13:39:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:49.147 13:39:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:49.148 13:39:54 -- nvmf/common.sh@544 -- # jq . 00:25:49.148 13:39:54 -- nvmf/common.sh@545 -- # IFS=, 00:25:49.148 13:39:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:49.148 "params": { 00:25:49.148 "name": "Nvme0", 00:25:49.148 "trtype": "tcp", 00:25:49.148 "traddr": "10.0.0.2", 00:25:49.148 "adrfam": "ipv4", 00:25:49.148 "trsvcid": "4420", 00:25:49.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:49.148 "hdgst": false, 00:25:49.148 "ddgst": false 00:25:49.148 }, 00:25:49.148 "method": "bdev_nvme_attach_controller" 00:25:49.148 }' 00:25:49.148 13:39:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:49.148 13:39:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:49.148 13:39:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.148 13:39:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.148 13:39:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:49.148 13:39:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:49.148 13:39:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:49.148 13:39:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:49.148 13:39:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:49.148 13:39:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.148 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:49.148 fio-3.35 00:25:49.148 Starting 1 thread 00:25:49.715 [2024-12-15 13:39:55.225559] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
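The JSON printed just above is everything the fio SPDK plugin needs for this job: a single bdev_nvme_attach_controller call pointing at the listener created a moment earlier. The two RPC messages around this point are expected in this setup, since the plugin's embedded SPDK app finds /var/tmp/spdk.sock already owned by nvmf_tgt and continues without its own RPC server. A stand-alone sketch of the same invocation, with the config written to a file instead of a pipe:

# bdev.json: the attach-controller block printed above, wrapped the way the fio plugin
# expects a full SPDK JSON config (the wrapper shape is an assumption; only the inner
# block is visible in the trace).
cat > bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } }
] } ] }
EOF
# Job parameters reconstructed from the banner above and the 10001 msec runtime below;
# the harness itself generates the job file with gen_fio_conf and passes it on /dev/fd/61.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json --thread \
  --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 --time_based --runtime=10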
00:25:49.715 [2024-12-15 13:39:55.225686] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:59.694 00:25:59.694 filename0: (groupid=0, jobs=1): err= 0: pid=102180: Sun Dec 15 13:40:05 2024 00:25:59.694 read: IOPS=4489, BW=17.5MiB/s (18.4MB/s)(175MiB/10001msec) 00:25:59.694 slat (usec): min=5, max=103, avg= 7.50, stdev= 3.57 00:25:59.694 clat (usec): min=305, max=42389, avg=868.32, stdev=4260.50 00:25:59.694 lat (usec): min=311, max=42398, avg=875.81, stdev=4260.60 00:25:59.694 clat percentiles (usec): 00:25:59.694 | 1.00th=[ 363], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:25:59.694 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 420], 00:25:59.694 | 70.00th=[ 433], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 494], 00:25:59.694 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:25:59.694 | 99.99th=[42206] 00:25:59.694 bw ( KiB/s): min= 5888, max=27840, per=100.00%, avg=18182.74, stdev=7413.31, samples=19 00:25:59.694 iops : min= 1472, max= 6960, avg=4545.68, stdev=1853.33, samples=19 00:25:59.695 lat (usec) : 500=95.87%, 750=2.99%, 1000=0.01% 00:25:59.695 lat (msec) : 2=0.01%, 4=0.01%, 50=1.11% 00:25:59.695 cpu : usr=89.28%, sys=8.83%, ctx=23, majf=0, minf=0 00:25:59.695 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:59.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:59.695 issued rwts: total=44895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:59.695 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:59.695 00:25:59.695 Run status group 0 (all jobs): 00:25:59.695 READ: bw=17.5MiB/s (18.4MB/s), 17.5MiB/s-17.5MiB/s (18.4MB/s-18.4MB/s), io=175MiB (184MB), run=10001-10001msec 00:25:59.963 13:40:05 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:59.963 13:40:05 -- target/dif.sh@43 -- # local sub 00:25:59.964 13:40:05 -- target/dif.sh@45 -- # for sub in "$@" 00:25:59.964 13:40:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:59.964 13:40:05 -- target/dif.sh@36 -- # local sub_id=0 00:25:59.964 13:40:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 13:40:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 00:25:59.964 real 0m10.987s 00:25:59.964 user 0m9.560s 00:25:59.964 sys 0m1.149s 00:25:59.964 13:40:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 ************************************ 00:25:59.964 END TEST fio_dif_1_default 00:25:59.964 ************************************ 00:25:59.964 13:40:05 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:59.964 13:40:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:59.964 13:40:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 ************************************ 00:25:59.964 START 
TEST fio_dif_1_multi_subsystems 00:25:59.964 ************************************ 00:25:59.964 13:40:05 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:59.964 13:40:05 -- target/dif.sh@92 -- # local files=1 00:25:59.964 13:40:05 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:59.964 13:40:05 -- target/dif.sh@28 -- # local sub 00:25:59.964 13:40:05 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.964 13:40:05 -- target/dif.sh@31 -- # create_subsystem 0 00:25:59.964 13:40:05 -- target/dif.sh@18 -- # local sub_id=0 00:25:59.964 13:40:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 bdev_null0 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 13:40:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 13:40:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 13:40:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:25:59.964 [2024-12-15 13:40:05.638400] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.964 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.964 13:40:05 -- target/dif.sh@30 -- # for sub in "$@" 00:25:59.964 13:40:05 -- target/dif.sh@31 -- # create_subsystem 1 00:25:59.964 13:40:05 -- target/dif.sh@18 -- # local sub_id=1 00:25:59.964 13:40:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:59.964 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.964 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:26:00.238 bdev_null1 00:26:00.238 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.238 13:40:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:00.238 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.238 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:26:00.238 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.238 13:40:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:00.238 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.238 13:40:05 -- common/autotest_common.sh@10 -- # set +x 00:26:00.238 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.238 13:40:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.238 13:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.238 13:40:05 -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.238 13:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.238 13:40:05 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:00.238 13:40:05 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:00.238 13:40:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:00.238 13:40:05 -- nvmf/common.sh@520 -- # config=() 00:26:00.238 13:40:05 -- nvmf/common.sh@520 -- # local subsystem config 00:26:00.238 13:40:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:00.238 13:40:05 -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.238 13:40:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.238 13:40:05 -- target/dif.sh@54 -- # local file 00:26:00.238 13:40:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:00.238 { 00:26:00.238 "params": { 00:26:00.238 "name": "Nvme$subsystem", 00:26:00.238 "trtype": "$TEST_TRANSPORT", 00:26:00.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.238 "adrfam": "ipv4", 00:26:00.238 "trsvcid": "$NVMF_PORT", 00:26:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.238 "hdgst": ${hdgst:-false}, 00:26:00.238 "ddgst": ${ddgst:-false} 00:26:00.238 }, 00:26:00.238 "method": "bdev_nvme_attach_controller" 00:26:00.238 } 00:26:00.238 EOF 00:26:00.238 )") 00:26:00.238 13:40:05 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.238 13:40:05 -- target/dif.sh@56 -- # cat 00:26:00.238 13:40:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:00.238 13:40:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.238 13:40:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:00.238 13:40:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.238 13:40:05 -- common/autotest_common.sh@1330 -- # shift 00:26:00.238 13:40:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:00.238 13:40:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.238 13:40:05 -- nvmf/common.sh@542 -- # cat 00:26:00.238 13:40:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.238 13:40:05 -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.238 13:40:05 -- target/dif.sh@73 -- # cat 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:00.238 13:40:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:00.238 13:40:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:00.238 { 00:26:00.238 "params": { 00:26:00.238 "name": "Nvme$subsystem", 00:26:00.238 "trtype": "$TEST_TRANSPORT", 00:26:00.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.238 "adrfam": "ipv4", 00:26:00.238 "trsvcid": "$NVMF_PORT", 00:26:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.238 "hdgst": ${hdgst:-false}, 00:26:00.238 "ddgst": ${ddgst:-false} 00:26:00.238 }, 00:26:00.238 "method": "bdev_nvme_attach_controller" 00:26:00.238 } 00:26:00.238 EOF 00:26:00.238 )") 00:26:00.238 13:40:05 -- target/dif.sh@72 -- # (( file++ )) 00:26:00.238 13:40:05 -- 
target/dif.sh@72 -- # (( file <= files )) 00:26:00.238 13:40:05 -- nvmf/common.sh@542 -- # cat 00:26:00.238 13:40:05 -- nvmf/common.sh@544 -- # jq . 00:26:00.238 13:40:05 -- nvmf/common.sh@545 -- # IFS=, 00:26:00.238 13:40:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:00.238 "params": { 00:26:00.238 "name": "Nvme0", 00:26:00.238 "trtype": "tcp", 00:26:00.238 "traddr": "10.0.0.2", 00:26:00.238 "adrfam": "ipv4", 00:26:00.238 "trsvcid": "4420", 00:26:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.238 "hdgst": false, 00:26:00.238 "ddgst": false 00:26:00.238 }, 00:26:00.238 "method": "bdev_nvme_attach_controller" 00:26:00.238 },{ 00:26:00.238 "params": { 00:26:00.238 "name": "Nvme1", 00:26:00.238 "trtype": "tcp", 00:26:00.238 "traddr": "10.0.0.2", 00:26:00.238 "adrfam": "ipv4", 00:26:00.238 "trsvcid": "4420", 00:26:00.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.238 "hdgst": false, 00:26:00.238 "ddgst": false 00:26:00.238 }, 00:26:00.238 "method": "bdev_nvme_attach_controller" 00:26:00.238 }' 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:00.238 13:40:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:00.238 13:40:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:00.238 13:40:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:00.238 13:40:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:00.238 13:40:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.238 13:40:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.238 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.238 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.238 fio-3.35 00:26:00.238 Starting 2 threads 00:26:00.805 [2024-12-15 13:40:06.416501] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
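The socket-in-use message above is the same benign fio-plugin notice seen in the single-subsystem run. The more interesting part is the target-side layout this job exercises: two DIF type 1 null bdevs behind two subsystems, both listening on 10.0.0.2:4420, with one fio job per subsystem (the two attached controllers are named Nvme0 and Nvme1 in the config above, so the jobs presumably target Nvme0n1 and Nvme1n1). The rpc_cmd calls traced earlier map onto plain scripts/rpc.py invocations, roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1; do
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

(rpc_cmd is the harness wrapper that forwards its arguments to scripts/rpc.py against the running nvmf_tgt.)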
00:26:00.805 [2024-12-15 13:40:06.416580] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:13.005 00:26:13.005 filename0: (groupid=0, jobs=1): err= 0: pid=102340: Sun Dec 15 13:40:16 2024 00:26:13.005 read: IOPS=287, BW=1150KiB/s (1177kB/s)(11.3MiB/10034msec) 00:26:13.005 slat (nsec): min=6215, max=59516, avg=8178.19, stdev=4063.79 00:26:13.005 clat (usec): min=373, max=41631, avg=13891.79, stdev=19032.50 00:26:13.005 lat (usec): min=380, max=41645, avg=13899.97, stdev=19032.38 00:26:13.005 clat percentiles (usec): 00:26:13.005 | 1.00th=[ 383], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:26:13.005 | 30.00th=[ 424], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 494], 00:26:13.005 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:26:13.005 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:13.005 | 99.99th=[41681] 00:26:13.005 bw ( KiB/s): min= 768, max= 1920, per=49.72%, avg=1152.05, stdev=298.15, samples=20 00:26:13.005 iops : min= 192, max= 480, avg=288.00, stdev=74.55, samples=20 00:26:13.005 lat (usec) : 500=60.71%, 750=4.54%, 1000=1.32% 00:26:13.005 lat (msec) : 4=0.14%, 50=33.29% 00:26:13.005 cpu : usr=96.02%, sys=3.45%, ctx=21, majf=0, minf=7 00:26:13.005 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.005 issued rwts: total=2884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.005 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.005 filename1: (groupid=0, jobs=1): err= 0: pid=102341: Sun Dec 15 13:40:16 2024 00:26:13.005 read: IOPS=292, BW=1170KiB/s (1198kB/s)(11.4MiB/10012msec) 00:26:13.005 slat (nsec): min=6265, max=60966, avg=8170.95, stdev=3989.99 00:26:13.005 clat (usec): min=379, max=42389, avg=13652.43, stdev=18961.15 00:26:13.005 lat (usec): min=386, max=42400, avg=13660.60, stdev=18961.17 00:26:13.005 clat percentiles (usec): 00:26:13.005 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 416], 00:26:13.005 | 30.00th=[ 429], 40.00th=[ 441], 50.00th=[ 461], 60.00th=[ 494], 00:26:13.005 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:26:13.005 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:26:13.005 | 99.99th=[42206] 00:26:13.005 bw ( KiB/s): min= 800, max= 1632, per=50.45%, avg=1169.60, stdev=224.71, samples=20 00:26:13.005 iops : min= 200, max= 408, avg=292.40, stdev=56.18, samples=20 00:26:13.005 lat (usec) : 500=61.10%, 750=5.19%, 1000=0.92% 00:26:13.005 lat (msec) : 4=0.14%, 50=32.65% 00:26:13.005 cpu : usr=96.12%, sys=3.32%, ctx=28, majf=0, minf=0 00:26:13.005 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.005 issued rwts: total=2928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.005 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:13.005 00:26:13.005 Run status group 0 (all jobs): 00:26:13.005 READ: bw=2317KiB/s (2373kB/s), 1150KiB/s-1170KiB/s (1177kB/s-1198kB/s), io=22.7MiB (23.8MB), run=10012-10034msec 00:26:13.005 13:40:16 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:13.005 13:40:16 -- target/dif.sh@43 -- # local sub 00:26:13.005 13:40:16 -- target/dif.sh@45 -- # for 
sub in "$@" 00:26:13.005 13:40:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.005 13:40:16 -- target/dif.sh@36 -- # local sub_id=0 00:26:13.005 13:40:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.005 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.005 13:40:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.005 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.005 13:40:16 -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.005 13:40:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:13.005 13:40:16 -- target/dif.sh@36 -- # local sub_id=1 00:26:13.005 13:40:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.005 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.005 13:40:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:13.005 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.005 00:26:13.005 real 0m11.197s 00:26:13.005 user 0m20.062s 00:26:13.005 sys 0m0.953s 00:26:13.005 13:40:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 ************************************ 00:26:13.005 END TEST fio_dif_1_multi_subsystems 00:26:13.005 ************************************ 00:26:13.005 13:40:16 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:13.005 13:40:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:13.005 13:40:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 ************************************ 00:26:13.005 START TEST fio_dif_rand_params 00:26:13.005 ************************************ 00:26:13.005 13:40:16 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:13.005 13:40:16 -- target/dif.sh@100 -- # local NULL_DIF 00:26:13.005 13:40:16 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:13.005 13:40:16 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:13.005 13:40:16 -- target/dif.sh@103 -- # bs=128k 00:26:13.005 13:40:16 -- target/dif.sh@103 -- # numjobs=3 00:26:13.005 13:40:16 -- target/dif.sh@103 -- # iodepth=3 00:26:13.005 13:40:16 -- target/dif.sh@103 -- # runtime=5 00:26:13.005 13:40:16 -- target/dif.sh@105 -- # create_subsystems 0 00:26:13.005 13:40:16 -- target/dif.sh@28 -- # local sub 00:26:13.005 13:40:16 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.005 13:40:16 -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.005 13:40:16 -- target/dif.sh@18 -- # local sub_id=0 00:26:13.005 13:40:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:13.005 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.005 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.005 bdev_null0 00:26:13.005 13:40:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.006 13:40:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.006 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.006 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.006 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.006 13:40:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.006 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.006 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.006 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.006 13:40:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.006 13:40:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.006 13:40:16 -- common/autotest_common.sh@10 -- # set +x 00:26:13.006 [2024-12-15 13:40:16.889106] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.006 13:40:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.006 13:40:16 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:13.006 13:40:16 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:13.006 13:40:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:13.006 13:40:16 -- nvmf/common.sh@520 -- # config=() 00:26:13.006 13:40:16 -- nvmf/common.sh@520 -- # local subsystem config 00:26:13.006 13:40:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:13.006 13:40:16 -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.006 13:40:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.006 13:40:16 -- target/dif.sh@54 -- # local file 00:26:13.006 13:40:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:13.006 { 00:26:13.006 "params": { 00:26:13.006 "name": "Nvme$subsystem", 00:26:13.006 "trtype": "$TEST_TRANSPORT", 00:26:13.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.006 "adrfam": "ipv4", 00:26:13.006 "trsvcid": "$NVMF_PORT", 00:26:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.006 "hdgst": ${hdgst:-false}, 00:26:13.006 "ddgst": ${ddgst:-false} 00:26:13.006 }, 00:26:13.006 "method": "bdev_nvme_attach_controller" 00:26:13.006 } 00:26:13.006 EOF 00:26:13.006 )") 00:26:13.006 13:40:16 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.006 13:40:16 -- target/dif.sh@56 -- # cat 00:26:13.006 13:40:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:13.006 13:40:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.006 13:40:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:13.006 13:40:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.006 13:40:16 -- common/autotest_common.sh@1330 -- # shift 00:26:13.006 13:40:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:13.006 13:40:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.006 13:40:16 -- nvmf/common.sh@542 -- # cat 00:26:13.006 13:40:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.006 13:40:16 -- target/dif.sh@72 -- # (( file <= files )) 
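fio_dif_rand_params switches to a DIF type 3 null bdev and a heavier job shape; from the banner and the 5003-5005 msec runtimes further down, the generated job is a 3-thread, 128 KiB random read at queue depth 3 for about 5 seconds. A hypothetical reconstruction of the job file gen_fio_conf feeds to fio here (the real file is built on the fly and passed over /dev/fd/61, so the exact contents may differ):

cat > rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based
runtime=5

[filename0]
filename=Nvme0n1
EOF

With numjobs=3 the three clones all report under the name filename0, which is why three separate filename0 result blocks with distinct pids appear in the output below.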
00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:13.006 13:40:16 -- nvmf/common.sh@544 -- # jq . 00:26:13.006 13:40:16 -- nvmf/common.sh@545 -- # IFS=, 00:26:13.006 13:40:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:13.006 "params": { 00:26:13.006 "name": "Nvme0", 00:26:13.006 "trtype": "tcp", 00:26:13.006 "traddr": "10.0.0.2", 00:26:13.006 "adrfam": "ipv4", 00:26:13.006 "trsvcid": "4420", 00:26:13.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.006 "hdgst": false, 00:26:13.006 "ddgst": false 00:26:13.006 }, 00:26:13.006 "method": "bdev_nvme_attach_controller" 00:26:13.006 }' 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:13.006 13:40:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:13.006 13:40:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:13.006 13:40:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:13.006 13:40:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:13.006 13:40:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:13.006 13:40:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.006 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:13.006 ... 00:26:13.006 fio-3.35 00:26:13.006 Starting 3 threads 00:26:13.006 [2024-12-15 13:40:17.514456] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:13.006 [2024-12-15 13:40:17.514544] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:17.188 00:26:17.188 filename0: (groupid=0, jobs=1): err= 0: pid=102497: Sun Dec 15 13:40:22 2024 00:26:17.188 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(165MiB/5003msec) 00:26:17.188 slat (nsec): min=6408, max=59675, avg=9966.69, stdev=3582.22 00:26:17.188 clat (usec): min=5802, max=52713, avg=11326.19, stdev=3451.76 00:26:17.188 lat (usec): min=5810, max=52724, avg=11336.15, stdev=3451.77 00:26:17.188 clat percentiles (usec): 00:26:17.188 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10552], 00:26:17.188 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:26:17.188 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:26:17.188 | 99.00th=[13304], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:26:17.188 | 99.99th=[52691] 00:26:17.188 bw ( KiB/s): min=31744, max=36096, per=33.84%, avg=33817.60, stdev=1470.36, samples=10 00:26:17.188 iops : min= 248, max= 282, avg=264.20, stdev=11.49, samples=10 00:26:17.188 lat (msec) : 10=8.62%, 20=90.70%, 100=0.68% 00:26:17.188 cpu : usr=93.16%, sys=5.34%, ctx=5, majf=0, minf=9 00:26:17.188 IO depths : 1=7.0%, 2=93.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 issued rwts: total=1323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.188 filename0: (groupid=0, jobs=1): err= 0: pid=102498: Sun Dec 15 13:40:22 2024 00:26:17.188 read: IOPS=301, BW=37.7MiB/s (39.6MB/s)(189MiB/5005msec) 00:26:17.188 slat (nsec): min=6476, max=77534, avg=11881.89, stdev=5290.91 00:26:17.188 clat (usec): min=5341, max=52399, avg=9918.37, stdev=2050.27 00:26:17.188 lat (usec): min=5352, max=52422, avg=9930.25, stdev=2050.52 00:26:17.188 clat percentiles (usec): 00:26:17.188 | 1.00th=[ 6521], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9372], 00:26:17.188 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:26:17.188 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[11076], 00:26:17.188 | 99.00th=[11600], 99.50th=[12256], 99.90th=[51119], 99.95th=[52167], 00:26:17.188 | 99.99th=[52167] 00:26:17.188 bw ( KiB/s): min=36096, max=40192, per=38.66%, avg=38630.40, stdev=1075.67, samples=10 00:26:17.188 iops : min= 282, max= 314, avg=301.80, stdev= 8.40, samples=10 00:26:17.188 lat (msec) : 10=56.12%, 20=43.68%, 100=0.20% 00:26:17.188 cpu : usr=92.51%, sys=5.74%, ctx=7, majf=0, minf=0 00:26:17.188 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 issued rwts: total=1511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.188 filename0: (groupid=0, jobs=1): err= 0: pid=102499: Sun Dec 15 13:40:22 2024 00:26:17.188 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5003msec) 00:26:17.188 slat (nsec): min=6480, max=60633, avg=11254.29, stdev=3926.97 00:26:17.188 clat (usec): min=3758, max=17924, avg=13970.55, stdev=1503.18 00:26:17.188 lat (usec): min=3768, max=17937, avg=13981.80, stdev=1503.49 00:26:17.188 clat percentiles (usec): 00:26:17.188 | 1.00th=[ 
8291], 5.00th=[10945], 10.00th=[13042], 20.00th=[13435], 00:26:17.188 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:26:17.188 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:26:17.188 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17957], 00:26:17.188 | 99.99th=[17957] 00:26:17.188 bw ( KiB/s): min=26112, max=29892, per=27.46%, avg=27442.22, stdev=1112.28, samples=9 00:26:17.188 iops : min= 204, max= 233, avg=214.33, stdev= 8.54, samples=9 00:26:17.188 lat (msec) : 4=0.09%, 10=4.66%, 20=95.25% 00:26:17.188 cpu : usr=93.46%, sys=5.16%, ctx=11, majf=0, minf=9 00:26:17.188 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.188 issued rwts: total=1073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.188 00:26:17.188 Run status group 0 (all jobs): 00:26:17.188 READ: bw=97.6MiB/s (102MB/s), 26.8MiB/s-37.7MiB/s (28.1MB/s-39.6MB/s), io=488MiB (512MB), run=5003-5005msec 00:26:17.188 13:40:22 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:17.188 13:40:22 -- target/dif.sh@43 -- # local sub 00:26:17.188 13:40:22 -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.188 13:40:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.188 13:40:22 -- target/dif.sh@36 -- # local sub_id=0 00:26:17.188 13:40:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.188 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.188 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.188 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.188 13:40:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.188 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.188 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.188 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # bs=4k 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # numjobs=8 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # iodepth=16 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # runtime= 00:26:17.188 13:40:22 -- target/dif.sh@109 -- # files=2 00:26:17.188 13:40:22 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:17.188 13:40:22 -- target/dif.sh@28 -- # local sub 00:26:17.188 13:40:22 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.188 13:40:22 -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.188 13:40:22 -- target/dif.sh@18 -- # local sub_id=0 00:26:17.188 13:40:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:17.188 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.189 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 bdev_null0 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 [2024-12-15 13:40:22.901544] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.447 13:40:22 -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.447 13:40:22 -- target/dif.sh@18 -- # local sub_id=1 00:26:17.447 13:40:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 bdev_null1 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.447 13:40:22 -- target/dif.sh@31 -- # create_subsystem 2 00:26:17.447 13:40:22 -- target/dif.sh@18 -- # local sub_id=2 00:26:17.447 13:40:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 bdev_null2 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 
13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:17.447 13:40:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.447 13:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:17.447 13:40:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.447 13:40:22 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:17.447 13:40:22 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:17.447 13:40:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:17.447 13:40:22 -- nvmf/common.sh@520 -- # config=() 00:26:17.447 13:40:22 -- nvmf/common.sh@520 -- # local subsystem config 00:26:17.447 13:40:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.447 13:40:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.447 13:40:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.447 { 00:26:17.447 "params": { 00:26:17.447 "name": "Nvme$subsystem", 00:26:17.447 "trtype": "$TEST_TRANSPORT", 00:26:17.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.447 "adrfam": "ipv4", 00:26:17.447 "trsvcid": "$NVMF_PORT", 00:26:17.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.447 "hdgst": ${hdgst:-false}, 00:26:17.447 "ddgst": ${ddgst:-false} 00:26:17.447 }, 00:26:17.447 "method": "bdev_nvme_attach_controller" 00:26:17.447 } 00:26:17.447 EOF 00:26:17.447 )") 00:26:17.447 13:40:22 -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.447 13:40:22 -- target/dif.sh@54 -- # local file 00:26:17.447 13:40:22 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.447 13:40:22 -- target/dif.sh@56 -- # cat 00:26:17.447 13:40:22 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:17.447 13:40:22 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.447 13:40:22 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:17.447 13:40:22 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.447 13:40:22 -- nvmf/common.sh@542 -- # cat 00:26:17.447 13:40:22 -- common/autotest_common.sh@1330 -- # shift 00:26:17.447 13:40:22 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:17.448 13:40:22 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.448 13:40:22 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.448 13:40:22 -- target/dif.sh@73 -- # cat 00:26:17.448 13:40:22 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:17.448 13:40:22 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.448 13:40:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.448 13:40:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.448 { 00:26:17.448 "params": { 00:26:17.448 "name": "Nvme$subsystem", 00:26:17.448 "trtype": "$TEST_TRANSPORT", 00:26:17.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.448 "adrfam": "ipv4", 00:26:17.448 "trsvcid": "$NVMF_PORT", 00:26:17.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.448 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:17.448 "hdgst": ${hdgst:-false}, 00:26:17.448 "ddgst": ${ddgst:-false} 00:26:17.448 }, 00:26:17.448 "method": "bdev_nvme_attach_controller" 00:26:17.448 } 00:26:17.448 EOF 00:26:17.448 )") 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file++ )) 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.448 13:40:22 -- target/dif.sh@73 -- # cat 00:26:17.448 13:40:22 -- nvmf/common.sh@542 -- # cat 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file++ )) 00:26:17.448 13:40:22 -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.448 13:40:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:17.448 13:40:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:17.448 { 00:26:17.448 "params": { 00:26:17.448 "name": "Nvme$subsystem", 00:26:17.448 "trtype": "$TEST_TRANSPORT", 00:26:17.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.448 "adrfam": "ipv4", 00:26:17.448 "trsvcid": "$NVMF_PORT", 00:26:17.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.448 "hdgst": ${hdgst:-false}, 00:26:17.448 "ddgst": ${ddgst:-false} 00:26:17.448 }, 00:26:17.448 "method": "bdev_nvme_attach_controller" 00:26:17.448 } 00:26:17.448 EOF 00:26:17.448 )") 00:26:17.448 13:40:22 -- nvmf/common.sh@542 -- # cat 00:26:17.448 13:40:22 -- nvmf/common.sh@544 -- # jq . 00:26:17.448 13:40:22 -- nvmf/common.sh@545 -- # IFS=, 00:26:17.448 13:40:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:17.448 "params": { 00:26:17.448 "name": "Nvme0", 00:26:17.448 "trtype": "tcp", 00:26:17.448 "traddr": "10.0.0.2", 00:26:17.448 "adrfam": "ipv4", 00:26:17.448 "trsvcid": "4420", 00:26:17.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.448 "hdgst": false, 00:26:17.448 "ddgst": false 00:26:17.448 }, 00:26:17.448 "method": "bdev_nvme_attach_controller" 00:26:17.448 },{ 00:26:17.448 "params": { 00:26:17.448 "name": "Nvme1", 00:26:17.448 "trtype": "tcp", 00:26:17.448 "traddr": "10.0.0.2", 00:26:17.448 "adrfam": "ipv4", 00:26:17.448 "trsvcid": "4420", 00:26:17.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.448 "hdgst": false, 00:26:17.448 "ddgst": false 00:26:17.448 }, 00:26:17.448 "method": "bdev_nvme_attach_controller" 00:26:17.448 },{ 00:26:17.448 "params": { 00:26:17.448 "name": "Nvme2", 00:26:17.448 "trtype": "tcp", 00:26:17.448 "traddr": "10.0.0.2", 00:26:17.448 "adrfam": "ipv4", 00:26:17.448 "trsvcid": "4420", 00:26:17.448 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:17.448 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:17.448 "hdgst": false, 00:26:17.448 "ddgst": false 00:26:17.448 }, 00:26:17.448 "method": "bdev_nvme_attach_controller" 00:26:17.448 }' 00:26:17.448 13:40:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.448 13:40:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.448 13:40:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.448 13:40:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:17.448 13:40:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:17.448 13:40:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:17.448 13:40:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:17.448 13:40:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:17.448 13:40:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:17.448 13:40:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.705 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.705 ... 00:26:17.705 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.705 ... 00:26:17.705 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.705 ... 00:26:17.705 fio-3.35 00:26:17.705 Starting 24 threads 00:26:18.271 [2024-12-15 13:40:23.815474] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:18.271 [2024-12-15 13:40:23.815536] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:30.467 00:26:30.467 filename0: (groupid=0, jobs=1): err= 0: pid=102594: Sun Dec 15 13:40:34 2024 00:26:30.467 read: IOPS=192, BW=769KiB/s (788kB/s)(7692KiB/10001msec) 00:26:30.467 slat (usec): min=7, max=8025, avg=31.21, stdev=408.10 00:26:30.467 clat (msec): min=33, max=167, avg=83.06, stdev=24.61 00:26:30.467 lat (msec): min=33, max=167, avg=83.09, stdev=24.62 00:26:30.467 clat percentiles (msec): 00:26:30.467 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 63], 00:26:30.467 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:26:30.467 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 131], 00:26:30.467 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:26:30.467 | 99.99th=[ 167] 00:26:30.467 bw ( KiB/s): min= 512, max= 1072, per=3.65%, avg=762.32, stdev=124.76, samples=19 00:26:30.467 iops : min= 128, max= 268, avg=190.58, stdev=31.19, samples=19 00:26:30.467 lat (msec) : 50=9.31%, 100=70.46%, 250=20.23% 00:26:30.467 cpu : usr=33.73%, sys=0.74%, ctx=885, majf=0, minf=9 00:26:30.467 IO depths : 1=2.8%, 2=5.8%, 4=14.9%, 8=66.3%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:30.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.467 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.467 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.467 filename0: (groupid=0, jobs=1): err= 0: pid=102595: Sun Dec 15 13:40:34 2024 00:26:30.467 read: IOPS=229, BW=917KiB/s (939kB/s)(9200KiB/10037msec) 00:26:30.467 slat (usec): min=7, max=8022, avg=24.00, stdev=306.77 00:26:30.467 clat (msec): min=27, max=167, avg=69.57, stdev=25.30 00:26:30.467 lat (msec): min=27, max=167, avg=69.59, stdev=25.29 00:26:30.467 clat percentiles (msec): 00:26:30.467 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:26:30.467 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:30.467 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 122], 00:26:30.467 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:26:30.467 | 99.99th=[ 169] 00:26:30.467 bw ( KiB/s): min= 512, max= 1144, per=4.37%, avg=913.15, stdev=178.46, samples=20 00:26:30.467 iops : min= 128, max= 286, avg=228.25, stdev=44.61, samples=20 00:26:30.467 lat (msec) : 50=26.74%, 100=60.26%, 250=13.00% 00:26:30.467 cpu : usr=37.66%, sys=1.02%, ctx=1123, majf=0, minf=9 00:26:30.467 IO depths : 1=0.6%, 2=1.5%, 4=8.2%, 8=76.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:30.467 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.467 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.467 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.467 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.467 filename0: (groupid=0, jobs=1): err= 0: pid=102596: Sun Dec 15 13:40:34 2024 00:26:30.467 read: IOPS=221, BW=886KiB/s (908kB/s)(8892KiB/10032msec) 00:26:30.467 slat (usec): min=3, max=4022, avg=12.45, stdev=85.16 00:26:30.467 clat (msec): min=31, max=158, avg=72.05, stdev=22.41 00:26:30.467 lat (msec): min=31, max=158, avg=72.06, stdev=22.41 00:26:30.467 clat percentiles (msec): 00:26:30.467 | 1.00th=[ 39], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 50], 00:26:30.467 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 72], 00:26:30.467 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 114], 00:26:30.467 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 159], 99.95th=[ 159], 00:26:30.467 | 99.99th=[ 159] 00:26:30.467 bw ( KiB/s): min= 640, max= 1168, per=4.22%, avg=882.45, stdev=168.10, samples=20 00:26:30.468 iops : min= 160, max= 292, avg=220.55, stdev=42.05, samples=20 00:26:30.468 lat (msec) : 50=20.69%, 100=64.37%, 250=14.93% 00:26:30.468 cpu : usr=45.98%, sys=1.06%, ctx=1241, majf=0, minf=9 00:26:30.468 IO depths : 1=2.3%, 2=4.9%, 4=13.9%, 8=68.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename0: (groupid=0, jobs=1): err= 0: pid=102597: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=235, BW=944KiB/s (966kB/s)(9468KiB/10034msec) 00:26:30.468 slat (usec): min=7, max=8024, avg=26.14, stdev=339.23 00:26:30.468 clat (msec): min=25, max=156, avg=67.61, stdev=21.95 00:26:30.468 lat (msec): min=25, max=156, avg=67.64, stdev=21.96 00:26:30.468 clat percentiles (msec): 00:26:30.468 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:26:30.468 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 00:26:30.468 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:26:30.468 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.468 | 99.99th=[ 157] 00:26:30.468 bw ( KiB/s): min= 688, max= 1200, per=4.49%, avg=939.65, stdev=137.47, samples=20 00:26:30.468 iops : min= 172, max= 300, avg=234.85, stdev=34.39, samples=20 00:26:30.468 lat (msec) : 50=24.59%, 100=65.95%, 250=9.46% 00:26:30.468 cpu : usr=38.18%, sys=0.95%, ctx=1070, majf=0, minf=9 00:26:30.468 IO depths : 1=0.8%, 2=2.1%, 4=9.1%, 8=75.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename0: (groupid=0, jobs=1): err= 0: pid=102598: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=200, BW=801KiB/s (820kB/s)(8020KiB/10011msec) 00:26:30.468 slat (usec): min=5, max=8024, avg=22.50, stdev=309.74 00:26:30.468 clat (msec): min=35, max=151, avg=79.76, stdev=20.95 00:26:30.468 lat (msec): min=35, max=151, avg=79.78, stdev=20.94 00:26:30.468 clat percentiles (msec): 00:26:30.468 
| 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 56], 20.00th=[ 61], 00:26:30.468 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:26:30.468 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 116], 00:26:30.468 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 153], 00:26:30.468 | 99.99th=[ 153] 00:26:30.468 bw ( KiB/s): min= 640, max= 1048, per=3.80%, avg=795.05, stdev=122.88, samples=19 00:26:30.468 iops : min= 160, max= 262, avg=198.68, stdev=30.70, samples=19 00:26:30.468 lat (msec) : 50=8.13%, 100=75.21%, 250=16.66% 00:26:30.468 cpu : usr=37.08%, sys=0.76%, ctx=1014, majf=0, minf=9 00:26:30.468 IO depths : 1=1.9%, 2=4.3%, 4=12.9%, 8=69.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename0: (groupid=0, jobs=1): err= 0: pid=102599: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=196, BW=787KiB/s (806kB/s)(7880KiB/10007msec) 00:26:30.468 slat (usec): min=3, max=4023, avg=12.40, stdev=90.48 00:26:30.468 clat (msec): min=35, max=155, avg=81.19, stdev=22.30 00:26:30.468 lat (msec): min=35, max=155, avg=81.20, stdev=22.30 00:26:30.468 clat percentiles (msec): 00:26:30.468 | 1.00th=[ 37], 5.00th=[ 49], 10.00th=[ 59], 20.00th=[ 63], 00:26:30.468 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:26:30.468 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 122], 00:26:30.468 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.468 | 99.99th=[ 157] 00:26:30.468 bw ( KiB/s): min= 592, max= 1067, per=3.74%, avg=781.40, stdev=119.63, samples=20 00:26:30.468 iops : min= 148, max= 266, avg=195.30, stdev=29.82, samples=20 00:26:30.468 lat (msec) : 50=6.40%, 100=75.69%, 250=17.92% 00:26:30.468 cpu : usr=36.87%, sys=0.80%, ctx=996, majf=0, minf=9 00:26:30.468 IO depths : 1=2.9%, 2=6.4%, 4=17.0%, 8=63.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=92.1%, 8=2.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename0: (groupid=0, jobs=1): err= 0: pid=102600: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=194, BW=776KiB/s (795kB/s)(7768KiB/10009msec) 00:26:30.468 slat (usec): min=4, max=4017, avg=13.73, stdev=113.93 00:26:30.468 clat (msec): min=17, max=143, avg=82.36, stdev=22.56 00:26:30.468 lat (msec): min=17, max=143, avg=82.38, stdev=22.56 00:26:30.468 clat percentiles (msec): 00:26:30.468 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 64], 00:26:30.468 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 87], 00:26:30.468 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 127], 00:26:30.468 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:26:30.468 | 99.99th=[ 144] 00:26:30.468 bw ( KiB/s): min= 512, max= 944, per=3.69%, avg=770.90, stdev=97.19, samples=20 00:26:30.468 iops : min= 128, max= 236, avg=192.70, stdev=24.30, samples=20 00:26:30.468 lat (msec) : 20=0.72%, 50=6.49%, 100=71.32%, 250=21.47% 00:26:30.468 cpu : usr=36.37%, sys=0.89%, ctx=1184, majf=0, minf=9 00:26:30.468 IO depths : 1=2.7%, 2=6.1%, 4=16.0%, 8=65.2%, 
16=10.1%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename0: (groupid=0, jobs=1): err= 0: pid=102601: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=219, BW=876KiB/s (897kB/s)(8772KiB/10010msec) 00:26:30.468 slat (usec): min=4, max=8026, avg=21.49, stdev=296.19 00:26:30.468 clat (msec): min=23, max=163, avg=72.86, stdev=22.51 00:26:30.468 lat (msec): min=23, max=163, avg=72.88, stdev=22.52 00:26:30.468 clat percentiles (msec): 00:26:30.468 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 46], 20.00th=[ 54], 00:26:30.468 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 74], 00:26:30.468 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 109], 00:26:30.468 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 165], 99.95th=[ 165], 00:26:30.468 | 99.99th=[ 165] 00:26:30.468 bw ( KiB/s): min= 600, max= 1248, per=4.16%, avg=870.45, stdev=149.81, samples=20 00:26:30.468 iops : min= 150, max= 312, avg=217.60, stdev=37.46, samples=20 00:26:30.468 lat (msec) : 50=17.60%, 100=69.31%, 250=13.09% 00:26:30.468 cpu : usr=35.92%, sys=0.89%, ctx=1119, majf=0, minf=9 00:26:30.468 IO depths : 1=1.1%, 2=2.7%, 4=10.1%, 8=73.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename1: (groupid=0, jobs=1): err= 0: pid=102602: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=246, BW=985KiB/s (1009kB/s)(9916KiB/10064msec) 00:26:30.468 slat (usec): min=6, max=5014, avg=12.24, stdev=100.58 00:26:30.468 clat (usec): min=1708, max=142025, avg=64788.25, stdev=25087.06 00:26:30.468 lat (usec): min=1726, max=142044, avg=64800.49, stdev=25089.44 00:26:30.468 clat percentiles (usec): 00:26:30.468 | 1.00th=[ 1893], 5.00th=[ 27395], 10.00th=[ 39584], 20.00th=[ 45876], 00:26:30.468 | 30.00th=[ 50070], 40.00th=[ 58983], 50.00th=[ 63177], 60.00th=[ 69731], 00:26:30.468 | 70.00th=[ 72877], 80.00th=[ 84411], 90.00th=[ 99091], 95.00th=[106431], 00:26:30.468 | 99.00th=[135267], 99.50th=[135267], 99.90th=[141558], 99.95th=[141558], 00:26:30.468 | 99.99th=[141558] 00:26:30.468 bw ( KiB/s): min= 640, max= 2176, per=4.71%, avg=984.80, stdev=322.65, samples=20 00:26:30.468 iops : min= 160, max= 544, avg=246.15, stdev=80.69, samples=20 00:26:30.468 lat (msec) : 2=1.21%, 4=1.37%, 10=1.29%, 20=0.65%, 50=25.33% 00:26:30.468 lat (msec) : 100=61.23%, 250=8.91% 00:26:30.468 cpu : usr=41.67%, sys=0.99%, ctx=1529, majf=0, minf=0 00:26:30.468 IO depths : 1=1.9%, 2=4.0%, 4=11.7%, 8=70.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:30.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.468 issued rwts: total=2479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.468 filename1: (groupid=0, jobs=1): err= 0: pid=102603: Sun Dec 15 13:40:34 2024 00:26:30.468 read: IOPS=213, BW=853KiB/s (873kB/s)(8548KiB/10022msec) 00:26:30.468 slat (usec): min=3, max=8103, avg=18.33, 
stdev=211.52 00:26:30.468 clat (msec): min=31, max=155, avg=74.90, stdev=22.53 00:26:30.468 lat (msec): min=31, max=155, avg=74.92, stdev=22.53 00:26:30.468 clat percentiles (msec): 00:26:30.468 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:26:30.469 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 78], 00:26:30.469 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 114], 00:26:30.469 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.469 | 99.99th=[ 157] 00:26:30.469 bw ( KiB/s): min= 640, max= 1120, per=4.06%, avg=848.05, stdev=147.23, samples=20 00:26:30.469 iops : min= 160, max= 280, avg=212.00, stdev=36.82, samples=20 00:26:30.469 lat (msec) : 50=14.79%, 100=71.78%, 250=13.43% 00:26:30.469 cpu : usr=42.12%, sys=0.94%, ctx=1346, majf=0, minf=9 00:26:30.469 IO depths : 1=1.3%, 2=2.7%, 4=9.1%, 8=74.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102604: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=191, BW=766KiB/s (785kB/s)(7668KiB/10006msec) 00:26:30.469 slat (usec): min=4, max=8020, avg=18.62, stdev=258.69 00:26:30.469 clat (msec): min=36, max=150, avg=83.38, stdev=23.00 00:26:30.469 lat (msec): min=36, max=150, avg=83.40, stdev=23.00 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 41], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 64], 00:26:30.469 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 88], 00:26:30.469 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 115], 95.00th=[ 132], 00:26:30.469 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:26:30.469 | 99.99th=[ 150] 00:26:30.469 bw ( KiB/s): min= 512, max= 1000, per=3.63%, avg=759.58, stdev=120.99, samples=19 00:26:30.469 iops : min= 128, max= 250, avg=189.89, stdev=30.25, samples=19 00:26:30.469 lat (msec) : 50=4.43%, 100=77.31%, 250=18.26% 00:26:30.469 cpu : usr=40.05%, sys=0.98%, ctx=1112, majf=0, minf=9 00:26:30.469 IO depths : 1=4.3%, 2=9.0%, 4=19.9%, 8=58.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102605: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=229, BW=917KiB/s (939kB/s)(9204KiB/10034msec) 00:26:30.469 slat (usec): min=7, max=8023, avg=20.39, stdev=289.19 00:26:30.469 clat (msec): min=33, max=155, avg=69.59, stdev=21.47 00:26:30.469 lat (msec): min=33, max=155, avg=69.61, stdev=21.48 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:26:30.469 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:26:30.469 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 00:26:30.469 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.469 | 99.99th=[ 157] 00:26:30.469 bw ( KiB/s): min= 664, max= 1152, per=4.38%, avg=915.30, stdev=134.95, samples=20 00:26:30.469 iops : min= 166, max= 288, avg=228.75, stdev=33.71, samples=20 
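These randread jobs all run with the 4 KiB block size configured for this pass (bs=4k), so the "bw" and "iops" averages fio prints are two views of the same quantity: bandwidth in KiB/s is simply IOPS times 4. As an illustrative check, not part of the test output, against the pid=102607 job just above (avg 211.75 IOPS, avg 847.25 KiB/s):

# illustrative only: KiB/s expected from the reported average IOPS at bs=4k
awk 'BEGIN { print 211.75 * 4 }'   # prints 847, in line with the reported 847.25 KiB/s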
00:26:30.469 lat (msec) : 50=24.47%, 100=67.84%, 250=7.69% 00:26:30.469 cpu : usr=32.48%, sys=0.70%, ctx=869, majf=0, minf=9 00:26:30.469 IO depths : 1=0.4%, 2=0.9%, 4=8.4%, 8=76.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=89.7%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102606: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=233, BW=934KiB/s (956kB/s)(9368KiB/10033msec) 00:26:30.469 slat (usec): min=3, max=4019, avg=13.75, stdev=117.17 00:26:30.469 clat (msec): min=25, max=170, avg=68.36, stdev=23.85 00:26:30.469 lat (msec): min=25, max=170, avg=68.38, stdev=23.85 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 30], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 48], 00:26:30.469 | 30.00th=[ 52], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 71], 00:26:30.469 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 116], 00:26:30.469 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 171], 99.95th=[ 171], 00:26:30.469 | 99.99th=[ 171] 00:26:30.469 bw ( KiB/s): min= 640, max= 1248, per=4.47%, avg=934.05, stdev=158.46, samples=20 00:26:30.469 iops : min= 160, max= 312, avg=233.45, stdev=39.67, samples=20 00:26:30.469 lat (msec) : 50=26.73%, 100=62.43%, 250=10.85% 00:26:30.469 cpu : usr=38.22%, sys=0.97%, ctx=1054, majf=0, minf=9 00:26:30.469 IO depths : 1=1.0%, 2=2.1%, 4=8.6%, 8=75.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=2342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102607: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=213, BW=852KiB/s (873kB/s)(8540KiB/10022msec) 00:26:30.469 slat (usec): min=7, max=8023, avg=25.38, stdev=297.09 00:26:30.469 clat (msec): min=34, max=143, avg=74.84, stdev=21.74 00:26:30.469 lat (msec): min=34, max=143, avg=74.87, stdev=21.74 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 38], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 58], 00:26:30.469 | 30.00th=[ 63], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:26:30.469 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 113], 00:26:30.469 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:26:30.469 | 99.99th=[ 144] 00:26:30.469 bw ( KiB/s): min= 640, max= 1072, per=4.05%, avg=847.25, stdev=113.84, samples=20 00:26:30.469 iops : min= 160, max= 268, avg=211.75, stdev=28.50, samples=20 00:26:30.469 lat (msec) : 50=14.00%, 100=72.04%, 250=13.96% 00:26:30.469 cpu : usr=41.64%, sys=0.87%, ctx=1279, majf=0, minf=9 00:26:30.469 IO depths : 1=2.3%, 2=5.2%, 4=14.5%, 8=67.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102608: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=226, BW=905KiB/s (927kB/s)(9076KiB/10031msec) 
00:26:30.469 slat (usec): min=7, max=8021, avg=16.12, stdev=188.50 00:26:30.469 clat (msec): min=30, max=155, avg=70.54, stdev=21.71 00:26:30.469 lat (msec): min=30, max=155, avg=70.55, stdev=21.71 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 50], 00:26:30.469 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:26:30.469 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 111], 00:26:30.469 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.469 | 99.99th=[ 157] 00:26:30.469 bw ( KiB/s): min= 640, max= 1152, per=4.33%, avg=904.80, stdev=132.73, samples=20 00:26:30.469 iops : min= 160, max= 288, avg=226.15, stdev=33.20, samples=20 00:26:30.469 lat (msec) : 50=20.85%, 100=70.60%, 250=8.55% 00:26:30.469 cpu : usr=33.51%, sys=0.97%, ctx=897, majf=0, minf=10 00:26:30.469 IO depths : 1=0.6%, 2=1.6%, 4=8.1%, 8=76.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=2269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename1: (groupid=0, jobs=1): err= 0: pid=102609: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=196, BW=787KiB/s (806kB/s)(7880KiB/10008msec) 00:26:30.469 slat (usec): min=4, max=8023, avg=25.25, stdev=325.23 00:26:30.469 clat (msec): min=38, max=177, avg=81.08, stdev=23.32 00:26:30.469 lat (msec): min=38, max=177, avg=81.10, stdev=23.31 00:26:30.469 clat percentiles (msec): 00:26:30.469 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:26:30.469 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 84], 00:26:30.469 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 126], 00:26:30.469 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 178], 00:26:30.469 | 99.99th=[ 178] 00:26:30.469 bw ( KiB/s): min= 512, max= 1056, per=3.76%, avg=785.30, stdev=112.11, samples=20 00:26:30.469 iops : min= 128, max= 264, avg=196.30, stdev=28.03, samples=20 00:26:30.469 lat (msec) : 50=6.35%, 100=75.08%, 250=18.58% 00:26:30.469 cpu : usr=39.18%, sys=0.98%, ctx=1163, majf=0, minf=9 00:26:30.469 IO depths : 1=2.0%, 2=4.7%, 4=13.7%, 8=68.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:30.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.469 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.469 filename2: (groupid=0, jobs=1): err= 0: pid=102610: Sun Dec 15 13:40:34 2024 00:26:30.469 read: IOPS=227, BW=911KiB/s (933kB/s)(9156KiB/10054msec) 00:26:30.469 slat (usec): min=3, max=7980, avg=19.25, stdev=220.85 00:26:30.469 clat (msec): min=8, max=162, avg=70.07, stdev=21.89 00:26:30.469 lat (msec): min=8, max=162, avg=70.09, stdev=21.88 00:26:30.469 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 16], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 52], 00:26:30.470 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:26:30.470 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:26:30.470 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:26:30.470 | 99.99th=[ 163] 00:26:30.470 bw ( KiB/s): min= 688, max= 1200, per=4.35%, avg=909.25, stdev=136.54, samples=20 00:26:30.470 iops : min= 172, 
max= 300, avg=227.25, stdev=34.08, samples=20 00:26:30.470 lat (msec) : 10=0.70%, 20=0.70%, 50=17.43%, 100=71.91%, 250=9.26% 00:26:30.470 cpu : usr=39.39%, sys=0.84%, ctx=1110, majf=0, minf=9 00:26:30.470 IO depths : 1=1.4%, 2=3.4%, 4=11.6%, 8=71.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102611: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=238, BW=955KiB/s (978kB/s)(9608KiB/10057msec) 00:26:30.470 slat (usec): min=4, max=11036, avg=20.48, stdev=290.61 00:26:30.470 clat (msec): min=5, max=144, avg=66.86, stdev=22.52 00:26:30.470 lat (msec): min=5, max=155, avg=66.88, stdev=22.53 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 17], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 48], 00:26:30.470 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 72], 00:26:30.470 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:26:30.470 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:26:30.470 | 99.99th=[ 144] 00:26:30.470 bw ( KiB/s): min= 640, max= 1504, per=4.57%, avg=954.35, stdev=165.22, samples=20 00:26:30.470 iops : min= 160, max= 376, avg=238.55, stdev=41.30, samples=20 00:26:30.470 lat (msec) : 10=0.96%, 20=0.67%, 50=24.48%, 100=66.65%, 250=7.24% 00:26:30.470 cpu : usr=38.44%, sys=0.89%, ctx=1126, majf=0, minf=9 00:26:30.470 IO depths : 1=0.8%, 2=2.1%, 4=9.1%, 8=75.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102612: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=227, BW=908KiB/s (930kB/s)(9120KiB/10039msec) 00:26:30.470 slat (nsec): min=7550, max=58173, avg=10476.58, stdev=4797.35 00:26:30.470 clat (msec): min=27, max=176, avg=70.39, stdev=21.92 00:26:30.470 lat (msec): min=27, max=176, avg=70.40, stdev=21.92 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:26:30.470 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:26:30.470 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 108], 00:26:30.470 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 178], 99.95th=[ 178], 00:26:30.470 | 99.99th=[ 178] 00:26:30.470 bw ( KiB/s): min= 688, max= 1200, per=4.33%, avg=905.25, stdev=145.10, samples=20 00:26:30.470 iops : min= 172, max= 300, avg=226.25, stdev=36.30, samples=20 00:26:30.470 lat (msec) : 50=23.73%, 100=68.90%, 250=7.37% 00:26:30.470 cpu : usr=32.38%, sys=0.82%, ctx=874, majf=0, minf=9 00:26:30.470 IO depths : 1=0.5%, 2=1.1%, 4=6.8%, 8=77.8%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=89.3%, 8=6.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102613: Sun Dec 
15 13:40:34 2024 00:26:30.470 read: IOPS=200, BW=803KiB/s (822kB/s)(8048KiB/10021msec) 00:26:30.470 slat (nsec): min=3842, max=37801, avg=11093.02, stdev=4398.82 00:26:30.470 clat (msec): min=29, max=157, avg=79.62, stdev=23.49 00:26:30.470 lat (msec): min=29, max=157, avg=79.64, stdev=23.49 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 62], 00:26:30.470 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 83], 00:26:30.470 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 126], 00:26:30.470 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:26:30.470 | 99.99th=[ 159] 00:26:30.470 bw ( KiB/s): min= 456, max= 1280, per=3.82%, avg=798.15, stdev=161.97, samples=20 00:26:30.470 iops : min= 114, max= 320, avg=199.50, stdev=40.51, samples=20 00:26:30.470 lat (msec) : 50=8.85%, 100=75.50%, 250=15.66% 00:26:30.470 cpu : usr=49.63%, sys=1.11%, ctx=1280, majf=0, minf=9 00:26:30.470 IO depths : 1=1.1%, 2=2.3%, 4=8.8%, 8=74.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=90.0%, 8=6.6%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102614: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=248, BW=995KiB/s (1019kB/s)(9.77MiB/10054msec) 00:26:30.470 slat (usec): min=4, max=8022, avg=17.34, stdev=215.72 00:26:30.470 clat (msec): min=8, max=141, avg=64.15, stdev=20.47 00:26:30.470 lat (msec): min=8, max=141, avg=64.17, stdev=20.47 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:26:30.470 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:26:30.470 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 92], 95.00th=[ 101], 00:26:30.470 | 99.00th=[ 115], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:26:30.470 | 99.99th=[ 142] 00:26:30.470 bw ( KiB/s): min= 728, max= 1504, per=4.76%, avg=995.10, stdev=168.64, samples=20 00:26:30.470 iops : min= 182, max= 376, avg=248.75, stdev=42.16, samples=20 00:26:30.470 lat (msec) : 10=0.28%, 20=1.64%, 50=26.44%, 100=65.96%, 250=5.68% 00:26:30.470 cpu : usr=35.08%, sys=0.89%, ctx=1058, majf=0, minf=9 00:26:30.470 IO depths : 1=0.8%, 2=1.9%, 4=8.4%, 8=75.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102615: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=243, BW=975KiB/s (998kB/s)(9800KiB/10055msec) 00:26:30.470 slat (usec): min=3, max=8019, avg=17.39, stdev=219.16 00:26:30.470 clat (msec): min=6, max=156, avg=65.47, stdev=24.43 00:26:30.470 lat (msec): min=6, max=156, avg=65.48, stdev=24.43 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:26:30.470 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 70], 00:26:30.470 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 109], 00:26:30.470 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:26:30.470 | 99.99th=[ 157] 00:26:30.470 
bw ( KiB/s): min= 638, max= 1328, per=4.66%, avg=973.55, stdev=194.57, samples=20 00:26:30.470 iops : min= 159, max= 332, avg=243.35, stdev=48.71, samples=20 00:26:30.470 lat (msec) : 10=0.65%, 20=1.31%, 50=31.10%, 100=57.10%, 250=9.84% 00:26:30.470 cpu : usr=37.05%, sys=0.88%, ctx=1039, majf=0, minf=9 00:26:30.470 IO depths : 1=0.6%, 2=1.4%, 4=8.2%, 8=76.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102616: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=228, BW=915KiB/s (937kB/s)(9180KiB/10033msec) 00:26:30.470 slat (usec): min=5, max=8022, avg=20.37, stdev=257.01 00:26:30.470 clat (msec): min=24, max=150, avg=69.81, stdev=21.98 00:26:30.470 lat (msec): min=24, max=150, avg=69.83, stdev=21.98 00:26:30.470 clat percentiles (msec): 00:26:30.470 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 50], 00:26:30.470 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:30.470 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 105], 95.00th=[ 111], 00:26:30.470 | 99.00th=[ 129], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 150], 00:26:30.470 | 99.99th=[ 150] 00:26:30.470 bw ( KiB/s): min= 696, max= 1192, per=4.36%, avg=910.85, stdev=137.20, samples=20 00:26:30.470 iops : min= 174, max= 298, avg=227.65, stdev=34.33, samples=20 00:26:30.470 lat (msec) : 50=21.22%, 100=67.49%, 250=11.29% 00:26:30.470 cpu : usr=38.58%, sys=0.96%, ctx=1265, majf=0, minf=9 00:26:30.470 IO depths : 1=1.0%, 2=2.1%, 4=8.7%, 8=75.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:30.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.470 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.470 filename2: (groupid=0, jobs=1): err= 0: pid=102617: Sun Dec 15 13:40:34 2024 00:26:30.470 read: IOPS=187, BW=749KiB/s (767kB/s)(7488KiB/10003msec) 00:26:30.470 slat (usec): min=7, max=8029, avg=19.05, stdev=261.92 00:26:30.471 clat (msec): min=36, max=170, avg=85.30, stdev=23.38 00:26:30.471 lat (msec): min=36, max=170, avg=85.32, stdev=23.38 00:26:30.471 clat percentiles (msec): 00:26:30.471 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 63], 00:26:30.471 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 87], 00:26:30.471 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 130], 00:26:30.471 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 00:26:30.471 | 99.99th=[ 171] 00:26:30.471 bw ( KiB/s): min= 634, max= 1072, per=3.53%, avg=738.26, stdev=103.41, samples=19 00:26:30.471 iops : min= 158, max= 268, avg=184.53, stdev=25.88, samples=19 00:26:30.471 lat (msec) : 50=5.72%, 100=70.57%, 250=23.72% 00:26:30.471 cpu : usr=32.42%, sys=0.74%, ctx=876, majf=0, minf=9 00:26:30.471 IO depths : 1=2.7%, 2=5.9%, 4=16.2%, 8=64.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:30.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.471 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.471 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.471 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:30.471 00:26:30.471 Run status group 0 (all jobs): 00:26:30.471 READ: bw=20.4MiB/s (21.4MB/s), 749KiB/s-995KiB/s (767kB/s-1019kB/s), io=205MiB (215MB), run=10001-10064msec 00:26:30.471 13:40:34 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:30.471 13:40:34 -- target/dif.sh@43 -- # local sub 00:26:30.471 13:40:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.471 13:40:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:30.471 13:40:34 -- target/dif.sh@36 -- # local sub_id=0 00:26:30.471 13:40:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.471 13:40:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:30.471 13:40:34 -- target/dif.sh@36 -- # local sub_id=1 00:26:30.471 13:40:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.471 13:40:34 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:30.471 13:40:34 -- target/dif.sh@36 -- # local sub_id=2 00:26:30.471 13:40:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # numjobs=2 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # iodepth=8 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # runtime=5 00:26:30.471 13:40:34 -- target/dif.sh@115 -- # files=1 00:26:30.471 13:40:34 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:30.471 13:40:34 -- target/dif.sh@28 -- # local sub 00:26:30.471 13:40:34 -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.471 13:40:34 -- target/dif.sh@31 -- # create_subsystem 0 00:26:30.471 13:40:34 -- target/dif.sh@18 -- # local sub_id=0 00:26:30.471 13:40:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
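The teardown and setup trace above maps one-to-one onto SPDK's stock RPC client: destroy_subsystems issues nvmf_delete_subsystem and bdev_null_delete per subsystem, and create_subsystems now rebuilds subsystems 0 and 1 with DIF type 1 null bdevs. As a minimal sketch (assuming scripts/rpc.py from the SPDK repo and the default /var/tmp/spdk.sock socket, with the TCP transport already created earlier in the test), the same subsystem 0 setup could be issued by hand:

./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB null bdev, 512 B blocks, 16 B metadata
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystem 1 repeats the same four calls with the -1 suffixes, which is what the create_subsystem 1 branch of the trace does next.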
00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 bdev_null0 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 [2024-12-15 13:40:34.316568] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.471 13:40:34 -- target/dif.sh@31 -- # create_subsystem 1 00:26:30.471 13:40:34 -- target/dif.sh@18 -- # local sub_id=1 00:26:30.471 13:40:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 bdev_null1 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.471 13:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.471 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.471 13:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.471 13:40:34 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:30.471 13:40:34 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:30.471 13:40:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:30.471 13:40:34 -- nvmf/common.sh@520 -- # config=() 00:26:30.471 13:40:34 -- nvmf/common.sh@520 -- # local subsystem config 00:26:30.471 13:40:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:30.471 13:40:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:30.471 { 00:26:30.471 "params": { 00:26:30.471 "name": 
"Nvme$subsystem", 00:26:30.471 "trtype": "$TEST_TRANSPORT", 00:26:30.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.471 "adrfam": "ipv4", 00:26:30.471 "trsvcid": "$NVMF_PORT", 00:26:30.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.471 "hdgst": ${hdgst:-false}, 00:26:30.471 "ddgst": ${ddgst:-false} 00:26:30.471 }, 00:26:30.471 "method": "bdev_nvme_attach_controller" 00:26:30.471 } 00:26:30.471 EOF 00:26:30.471 )") 00:26:30.471 13:40:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.471 13:40:34 -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.471 13:40:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.471 13:40:34 -- target/dif.sh@54 -- # local file 00:26:30.471 13:40:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:30.471 13:40:34 -- target/dif.sh@56 -- # cat 00:26:30.471 13:40:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.471 13:40:34 -- nvmf/common.sh@542 -- # cat 00:26:30.471 13:40:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:30.471 13:40:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.471 13:40:34 -- common/autotest_common.sh@1330 -- # shift 00:26:30.471 13:40:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:30.471 13:40:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.471 13:40:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.471 13:40:34 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:30.471 13:40:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:30.471 13:40:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.471 13:40:34 -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.471 13:40:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:30.472 13:40:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:30.472 { 00:26:30.472 "params": { 00:26:30.472 "name": "Nvme$subsystem", 00:26:30.472 "trtype": "$TEST_TRANSPORT", 00:26:30.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.472 "adrfam": "ipv4", 00:26:30.472 "trsvcid": "$NVMF_PORT", 00:26:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.472 "hdgst": ${hdgst:-false}, 00:26:30.472 "ddgst": ${ddgst:-false} 00:26:30.472 }, 00:26:30.472 "method": "bdev_nvme_attach_controller" 00:26:30.472 } 00:26:30.472 EOF 00:26:30.472 )") 00:26:30.472 13:40:34 -- target/dif.sh@73 -- # cat 00:26:30.472 13:40:34 -- nvmf/common.sh@542 -- # cat 00:26:30.472 13:40:34 -- target/dif.sh@72 -- # (( file++ )) 00:26:30.472 13:40:34 -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.472 13:40:34 -- nvmf/common.sh@544 -- # jq . 
00:26:30.472 13:40:34 -- nvmf/common.sh@545 -- # IFS=, 00:26:30.472 13:40:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:30.472 "params": { 00:26:30.472 "name": "Nvme0", 00:26:30.472 "trtype": "tcp", 00:26:30.472 "traddr": "10.0.0.2", 00:26:30.472 "adrfam": "ipv4", 00:26:30.472 "trsvcid": "4420", 00:26:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.472 "hdgst": false, 00:26:30.472 "ddgst": false 00:26:30.472 }, 00:26:30.472 "method": "bdev_nvme_attach_controller" 00:26:30.472 },{ 00:26:30.472 "params": { 00:26:30.472 "name": "Nvme1", 00:26:30.472 "trtype": "tcp", 00:26:30.472 "traddr": "10.0.0.2", 00:26:30.472 "adrfam": "ipv4", 00:26:30.472 "trsvcid": "4420", 00:26:30.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.472 "hdgst": false, 00:26:30.472 "ddgst": false 00:26:30.472 }, 00:26:30.472 "method": "bdev_nvme_attach_controller" 00:26:30.472 }' 00:26:30.472 13:40:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:30.472 13:40:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:30.472 13:40:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.472 13:40:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.472 13:40:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:30.472 13:40:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:30.472 13:40:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:30.472 13:40:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:30.472 13:40:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:30.472 13:40:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.472 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:30.472 ... 00:26:30.472 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:30.472 ... 00:26:30.472 fio-3.35 00:26:30.472 Starting 4 threads 00:26:30.472 [2024-12-15 13:40:35.048210] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
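The harness hands the plugin two descriptors: /dev/fd/62 carries the JSON bdev configuration shown above and /dev/fd/61 carries the generated fio job file. That job file is not printed in the log; the sketch below is only an illustration consistent with the parameters set for this pass (randread, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and assumes the Nvme0n1/Nvme1n1 bdev names that SPDK's bdev_nvme module derives from the attached controllers.

# hypothetical job file standing in for /dev/fd/61 (the real one comes from gen_fio_conf)
cat > dif.job <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# same invocation pattern as the trace, with the JSON config saved to a local file;
# bdev.json is a stand-in for the document printed above, not something the test writes out
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job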
00:26:30.472 [2024-12-15 13:40:35.048262] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:34.659 00:26:34.659 filename0: (groupid=0, jobs=1): err= 0: pid=102753: Sun Dec 15 13:40:40 2024 00:26:34.659 read: IOPS=2040, BW=15.9MiB/s (16.7MB/s)(80.4MiB/5042msec) 00:26:34.659 slat (nsec): min=6261, max=92640, avg=8864.63, stdev=5154.26 00:26:34.659 clat (usec): min=2210, max=41566, avg=3850.00, stdev=565.00 00:26:34.659 lat (usec): min=2230, max=41577, avg=3858.87, stdev=565.05 00:26:34.659 clat percentiles (usec): 00:26:34.659 | 1.00th=[ 3326], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3752], 00:26:34.659 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:26:34.659 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4047], 95.00th=[ 4178], 00:26:34.659 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5473], 99.95th=[ 5538], 00:26:34.659 | 99.99th=[41681] 00:26:34.659 bw ( KiB/s): min=15840, max=16752, per=25.21%, avg=16457.60, stdev=262.28, samples=10 00:26:34.659 iops : min= 1980, max= 2094, avg=2057.20, stdev=32.78, samples=10 00:26:34.659 lat (msec) : 4=86.72%, 10=13.27%, 50=0.02% 00:26:34.659 cpu : usr=94.70%, sys=3.99%, ctx=26, majf=0, minf=9 00:26:34.659 IO depths : 1=9.9%, 2=25.0%, 4=50.0%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.659 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.659 issued rwts: total=10290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.659 filename0: (groupid=0, jobs=1): err= 0: pid=102754: Sun Dec 15 13:40:40 2024 00:26:34.659 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5003msec) 00:26:34.659 slat (usec): min=6, max=108, avg=12.78, stdev= 7.60 00:26:34.659 clat (usec): min=2158, max=6402, avg=3836.29, stdev=212.95 00:26:34.659 lat (usec): min=2175, max=6427, avg=3849.07, stdev=212.80 00:26:34.659 clat percentiles (usec): 00:26:34.659 | 1.00th=[ 2999], 5.00th=[ 3654], 10.00th=[ 3687], 20.00th=[ 3720], 00:26:34.659 | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:26:34.659 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4015], 95.00th=[ 4146], 00:26:34.659 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5407], 00:26:34.659 | 99.99th=[ 5407] 00:26:34.659 bw ( KiB/s): min=16256, max=16640, per=25.27%, avg=16497.78, stdev=134.92, samples=9 00:26:34.659 iops : min= 2032, max= 2080, avg=2062.22, stdev=16.87, samples=9 00:26:34.659 lat (msec) : 4=87.75%, 10=12.25% 00:26:34.659 cpu : usr=95.10%, sys=3.58%, ctx=15, majf=0, minf=0 00:26:34.659 IO depths : 1=8.9%, 2=25.0%, 4=50.0%, 8=16.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.659 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.659 issued rwts: total=10288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.659 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.659 filename1: (groupid=0, jobs=1): err= 0: pid=102755: Sun Dec 15 13:40:40 2024 00:26:34.659 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.3MiB/5001msec) 00:26:34.659 slat (nsec): min=6271, max=80501, avg=9336.88, stdev=5301.83 00:26:34.659 clat (usec): min=1904, max=7910, avg=3852.15, stdev=288.37 00:26:34.659 lat (usec): min=1915, max=7917, avg=3861.48, stdev=288.32 00:26:34.659 clat percentiles (usec): 00:26:34.659 | 1.00th=[ 3163], 5.00th=[ 3589], 
10.00th=[ 3687], 20.00th=[ 3752], 00:26:34.659 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851], 00:26:34.659 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4293], 00:26:34.660 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 6194], 99.95th=[ 6456], 00:26:34.660 | 99.99th=[ 6783] 00:26:34.660 bw ( KiB/s): min=16256, max=16640, per=25.28%, avg=16506.67, stdev=135.53, samples=9 00:26:34.660 iops : min= 2032, max= 2080, avg=2063.33, stdev=16.94, samples=9 00:26:34.660 lat (msec) : 2=0.02%, 4=85.04%, 10=14.94% 00:26:34.660 cpu : usr=94.40%, sys=4.28%, ctx=5, majf=0, minf=0 00:26:34.660 IO depths : 1=3.8%, 2=11.6%, 4=63.4%, 8=21.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.660 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.660 issued rwts: total=10283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.660 filename1: (groupid=0, jobs=1): err= 0: pid=102756: Sun Dec 15 13:40:40 2024 00:26:34.660 read: IOPS=2056, BW=16.1MiB/s (16.8MB/s)(80.4MiB/5002msec) 00:26:34.660 slat (usec): min=6, max=110, avg=14.44, stdev= 8.94 00:26:34.660 clat (usec): min=1387, max=6912, avg=3818.23, stdev=265.08 00:26:34.660 lat (usec): min=1400, max=6920, avg=3832.68, stdev=265.43 00:26:34.660 clat percentiles (usec): 00:26:34.660 | 1.00th=[ 2933], 5.00th=[ 3621], 10.00th=[ 3654], 20.00th=[ 3720], 00:26:34.660 | 30.00th=[ 3752], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:26:34.660 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4113], 00:26:34.660 | 99.00th=[ 4686], 99.50th=[ 5604], 99.90th=[ 6259], 99.95th=[ 6390], 00:26:34.660 | 99.99th=[ 6652] 00:26:34.660 bw ( KiB/s): min=16256, max=16640, per=25.27%, avg=16501.33, stdev=130.48, samples=9 00:26:34.660 iops : min= 2032, max= 2080, avg=2062.67, stdev=16.31, samples=9 00:26:34.660 lat (msec) : 2=0.04%, 4=89.29%, 10=10.67% 00:26:34.660 cpu : usr=94.86%, sys=3.80%, ctx=8, majf=0, minf=0 00:26:34.660 IO depths : 1=6.0%, 2=25.0%, 4=50.0%, 8=19.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.660 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.660 issued rwts: total=10288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.660 00:26:34.660 Run status group 0 (all jobs): 00:26:34.660 READ: bw=63.8MiB/s (66.9MB/s), 15.9MiB/s-16.1MiB/s (16.7MB/s-16.8MB/s), io=321MiB (337MB), run=5001-5042msec 00:26:34.918 13:40:40 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:34.918 13:40:40 -- target/dif.sh@43 -- # local sub 00:26:34.918 13:40:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.918 13:40:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.918 13:40:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:34.918 13:40:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 13:40:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 13:40:40 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 13:40:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.918 13:40:40 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:34.918 13:40:40 -- target/dif.sh@36 -- # local sub_id=1 00:26:34.918 13:40:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 13:40:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 ************************************ 00:26:34.918 END TEST fio_dif_rand_params 00:26:34.918 ************************************ 00:26:34.918 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 00:26:34.918 real 0m23.583s 00:26:34.918 user 2m7.091s 00:26:34.918 sys 0m4.588s 00:26:34.918 13:40:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 13:40:40 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:34.918 13:40:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:34.918 13:40:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 ************************************ 00:26:34.918 START TEST fio_dif_digest 00:26:34.918 ************************************ 00:26:34.918 13:40:40 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:34.918 13:40:40 -- target/dif.sh@123 -- # local NULL_DIF 00:26:34.918 13:40:40 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:34.918 13:40:40 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:34.918 13:40:40 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:34.918 13:40:40 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:34.918 13:40:40 -- target/dif.sh@127 -- # numjobs=3 00:26:34.918 13:40:40 -- target/dif.sh@127 -- # iodepth=3 00:26:34.918 13:40:40 -- target/dif.sh@127 -- # runtime=10 00:26:34.918 13:40:40 -- target/dif.sh@128 -- # hdgst=true 00:26:34.918 13:40:40 -- target/dif.sh@128 -- # ddgst=true 00:26:34.918 13:40:40 -- target/dif.sh@130 -- # create_subsystems 0 00:26:34.918 13:40:40 -- target/dif.sh@28 -- # local sub 00:26:34.918 13:40:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.918 13:40:40 -- target/dif.sh@31 -- # create_subsystem 0 00:26:34.918 13:40:40 -- target/dif.sh@18 -- # local sub_id=0 00:26:34.918 13:40:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 bdev_null0 00:26:34.918 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 13:40:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:34.918 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.918 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.918 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.918 13:40:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:34.918 
13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.919 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.919 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.919 13:40:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.919 13:40:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.919 13:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:34.919 [2024-12-15 13:40:40.538555] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.919 13:40:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.919 13:40:40 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:34.919 13:40:40 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:34.919 13:40:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:34.919 13:40:40 -- nvmf/common.sh@520 -- # config=() 00:26:34.919 13:40:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:34.919 13:40:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.919 13:40:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.919 13:40:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.919 13:40:40 -- target/dif.sh@82 -- # gen_fio_conf 00:26:34.919 13:40:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.919 { 00:26:34.919 "params": { 00:26:34.919 "name": "Nvme$subsystem", 00:26:34.919 "trtype": "$TEST_TRANSPORT", 00:26:34.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.919 "adrfam": "ipv4", 00:26:34.919 "trsvcid": "$NVMF_PORT", 00:26:34.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.919 "hdgst": ${hdgst:-false}, 00:26:34.919 "ddgst": ${ddgst:-false} 00:26:34.919 }, 00:26:34.919 "method": "bdev_nvme_attach_controller" 00:26:34.919 } 00:26:34.919 EOF 00:26:34.919 )") 00:26:34.919 13:40:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:34.919 13:40:40 -- target/dif.sh@54 -- # local file 00:26:34.919 13:40:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.919 13:40:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:34.919 13:40:40 -- target/dif.sh@56 -- # cat 00:26:34.919 13:40:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.919 13:40:40 -- common/autotest_common.sh@1330 -- # shift 00:26:34.919 13:40:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:34.919 13:40:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.919 13:40:40 -- nvmf/common.sh@542 -- # cat 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.919 13:40:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.919 13:40:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.919 13:40:40 -- nvmf/common.sh@544 -- # jq . 
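Stripped of the xtrace plumbing, the create_subsystems 0 step above amounts to four RPCs against the running target: create a null bdev with 512+16-byte blocks and DIF type 3 metadata, create the subsystem, attach the bdev as its namespace, and add a TCP listener. A condensed sketch using scripts/rpc.py directly (rpc_cmd in this log is a thin wrapper around it; every argument is taken from the trace above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420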
00:26:34.919 13:40:40 -- nvmf/common.sh@545 -- # IFS=, 00:26:34.919 13:40:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:34.919 "params": { 00:26:34.919 "name": "Nvme0", 00:26:34.919 "trtype": "tcp", 00:26:34.919 "traddr": "10.0.0.2", 00:26:34.919 "adrfam": "ipv4", 00:26:34.919 "trsvcid": "4420", 00:26:34.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.919 "hdgst": true, 00:26:34.919 "ddgst": true 00:26:34.919 }, 00:26:34.919 "method": "bdev_nvme_attach_controller" 00:26:34.919 }' 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.919 13:40:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.919 13:40:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.919 13:40:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.919 13:40:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.919 13:40:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:34.919 13:40:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.177 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:35.177 ... 00:26:35.177 fio-3.35 00:26:35.177 Starting 3 threads 00:26:35.435 [2024-12-15 13:40:41.093572] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:35.435 [2024-12-15 13:40:41.094080] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:47.686 00:26:47.686 filename0: (groupid=0, jobs=1): err= 0: pid=102858: Sun Dec 15 13:40:51 2024 00:26:47.686 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(320MiB/10008msec) 00:26:47.686 slat (nsec): min=6613, max=65242, avg=12829.61, stdev=5150.13 00:26:47.686 clat (usec): min=8781, max=55832, avg=11707.11, stdev=3476.17 00:26:47.686 lat (usec): min=8792, max=55852, avg=11719.94, stdev=3476.68 00:26:47.686 clat percentiles (usec): 00:26:47.686 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:26:47.686 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:26:47.686 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:26:47.686 | 99.00th=[13435], 99.50th=[51119], 99.90th=[53740], 99.95th=[54789], 00:26:47.686 | 99.99th=[55837] 00:26:47.686 bw ( KiB/s): min=27648, max=34304, per=37.55%, avg=32742.40, stdev=1817.61, samples=20 00:26:47.686 iops : min= 216, max= 268, avg=255.80, stdev=14.20, samples=20 00:26:47.686 lat (msec) : 10=2.26%, 20=97.03%, 100=0.70% 00:26:47.686 cpu : usr=93.07%, sys=5.29%, ctx=11, majf=0, minf=9 00:26:47.686 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.686 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.686 filename0: (groupid=0, jobs=1): err= 0: pid=102859: Sun Dec 15 13:40:51 2024 00:26:47.686 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(235MiB/10046msec) 00:26:47.686 slat (nsec): min=6462, max=71972, avg=12753.15, stdev=5205.06 00:26:47.686 clat (usec): min=8572, max=53194, avg=16017.96, stdev=1979.92 00:26:47.686 lat (usec): min=8590, max=53207, avg=16030.72, stdev=1980.19 00:26:47.686 clat percentiles (usec): 00:26:47.686 | 1.00th=[ 9241], 5.00th=[13960], 10.00th=[14746], 20.00th=[15270], 00:26:47.686 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:26:47.686 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:26:47.686 | 99.00th=[19006], 99.50th=[19268], 99.90th=[50070], 99.95th=[53216], 00:26:47.686 | 99.99th=[53216] 00:26:47.686 bw ( KiB/s): min=22528, max=27136, per=27.51%, avg=23987.20, stdev=1289.66, samples=20 00:26:47.686 iops : min= 176, max= 212, avg=187.40, stdev=10.08, samples=20 00:26:47.686 lat (msec) : 10=2.50%, 20=97.34%, 50=0.05%, 100=0.11% 00:26:47.686 cpu : usr=94.08%, sys=4.44%, ctx=89, majf=0, minf=9 00:26:47.686 IO depths : 1=4.2%, 2=95.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.686 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.686 filename0: (groupid=0, jobs=1): err= 0: pid=102860: Sun Dec 15 13:40:51 2024 00:26:47.686 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(301MiB/10006msec) 00:26:47.686 slat (nsec): min=6622, max=68430, avg=11807.36, stdev=5006.61 00:26:47.686 clat (usec): min=6319, max=16787, avg=12464.17, stdev=1299.11 00:26:47.686 lat (usec): min=6329, max=16794, avg=12475.97, stdev=1299.41 00:26:47.686 clat percentiles (usec): 
00:26:47.686 | 1.00th=[ 7570], 5.00th=[10683], 10.00th=[11338], 20.00th=[11731], 00:26:47.686 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:26:47.686 | 70.00th=[13042], 80.00th=[13435], 90.00th=[13829], 95.00th=[14222], 00:26:47.686 | 99.00th=[15139], 99.50th=[15401], 99.90th=[16188], 99.95th=[16319], 00:26:47.686 | 99.99th=[16909] 00:26:47.686 bw ( KiB/s): min=28672, max=33792, per=35.27%, avg=30748.70, stdev=1044.50, samples=20 00:26:47.686 iops : min= 224, max= 264, avg=240.20, stdev= 8.15, samples=20 00:26:47.686 lat (msec) : 10=3.78%, 20=96.22% 00:26:47.686 cpu : usr=93.29%, sys=5.15%, ctx=33, majf=0, minf=9 00:26:47.686 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.686 issued rwts: total=2405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.686 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.686 00:26:47.686 Run status group 0 (all jobs): 00:26:47.686 READ: bw=85.1MiB/s (89.3MB/s), 23.4MiB/s-32.0MiB/s (24.5MB/s-33.5MB/s), io=855MiB (897MB), run=10006-10046msec 00:26:47.686 13:40:51 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:47.686 13:40:51 -- target/dif.sh@43 -- # local sub 00:26:47.686 13:40:51 -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.686 13:40:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.686 13:40:51 -- target/dif.sh@36 -- # local sub_id=0 00:26:47.686 13:40:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.686 13:40:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.686 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:47.687 13:40:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.687 13:40:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.687 13:40:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.687 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:47.687 ************************************ 00:26:47.687 END TEST fio_dif_digest 00:26:47.687 ************************************ 00:26:47.687 13:40:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.687 00:26:47.687 real 0m10.984s 00:26:47.687 user 0m28.749s 00:26:47.687 sys 0m1.740s 00:26:47.687 13:40:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:47.687 13:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:47.687 13:40:51 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:47.687 13:40:51 -- target/dif.sh@147 -- # nvmftestfini 00:26:47.687 13:40:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:47.687 13:40:51 -- nvmf/common.sh@116 -- # sync 00:26:47.687 13:40:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:47.687 13:40:51 -- nvmf/common.sh@119 -- # set +e 00:26:47.687 13:40:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:47.687 13:40:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:47.687 rmmod nvme_tcp 00:26:47.687 rmmod nvme_fabrics 00:26:47.687 rmmod nvme_keyring 00:26:47.687 13:40:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:47.687 13:40:51 -- nvmf/common.sh@123 -- # set -e 00:26:47.687 13:40:51 -- nvmf/common.sh@124 -- # return 0 00:26:47.687 13:40:51 -- nvmf/common.sh@477 -- # '[' -n 102094 ']' 00:26:47.687 13:40:51 -- nvmf/common.sh@478 -- # killprocess 102094 00:26:47.687 13:40:51 -- common/autotest_common.sh@936 -- # '[' -z 102094 ']' 
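The tail end of the run above is the shared teardown: destroy_subsystems removes the subsystem and null bdev, nvmfcleanup unloads the kernel NVMe/TCP modules, and killprocess stops the nvmf_tgt application. A condensed sketch, assuming the same PID as in this log (wait only works here because nvmf_tgt was started as a background child of the test shell):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 102094 && wait 102094   # killprocess: signal nvmf_tgt, then reap it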
00:26:47.687 13:40:51 -- common/autotest_common.sh@940 -- # kill -0 102094 00:26:47.687 13:40:51 -- common/autotest_common.sh@941 -- # uname 00:26:47.687 13:40:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:47.687 13:40:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102094 00:26:47.687 killing process with pid 102094 00:26:47.687 13:40:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:47.687 13:40:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:47.687 13:40:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102094' 00:26:47.687 13:40:51 -- common/autotest_common.sh@955 -- # kill 102094 00:26:47.687 13:40:51 -- common/autotest_common.sh@960 -- # wait 102094 00:26:47.687 13:40:51 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:47.687 13:40:51 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:47.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:47.687 Waiting for block devices as requested 00:26:47.687 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:47.687 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:47.687 13:40:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:47.687 13:40:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:47.687 13:40:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.687 13:40:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:47.687 13:40:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.687 13:40:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:47.687 13:40:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.687 13:40:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:47.687 ************************************ 00:26:47.687 END TEST nvmf_dif 00:26:47.687 ************************************ 00:26:47.687 00:26:47.687 real 0m59.926s 00:26:47.687 user 3m51.322s 00:26:47.687 sys 0m14.667s 00:26:47.687 13:40:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:47.687 13:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:47.687 13:40:52 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:47.687 13:40:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:47.687 13:40:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:47.687 13:40:52 -- common/autotest_common.sh@10 -- # set +x 00:26:47.687 ************************************ 00:26:47.687 START TEST nvmf_abort_qd_sizes 00:26:47.687 ************************************ 00:26:47.687 13:40:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:47.687 * Looking for test storage... 
00:26:47.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:47.687 13:40:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:47.687 13:40:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:47.687 13:40:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:47.687 13:40:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:47.687 13:40:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:47.687 13:40:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:47.687 13:40:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:47.687 13:40:52 -- scripts/common.sh@335 -- # IFS=.-: 00:26:47.687 13:40:52 -- scripts/common.sh@335 -- # read -ra ver1 00:26:47.687 13:40:52 -- scripts/common.sh@336 -- # IFS=.-: 00:26:47.687 13:40:52 -- scripts/common.sh@336 -- # read -ra ver2 00:26:47.687 13:40:52 -- scripts/common.sh@337 -- # local 'op=<' 00:26:47.687 13:40:52 -- scripts/common.sh@339 -- # ver1_l=2 00:26:47.687 13:40:52 -- scripts/common.sh@340 -- # ver2_l=1 00:26:47.687 13:40:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:47.687 13:40:52 -- scripts/common.sh@343 -- # case "$op" in 00:26:47.687 13:40:52 -- scripts/common.sh@344 -- # : 1 00:26:47.687 13:40:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:47.687 13:40:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:47.687 13:40:52 -- scripts/common.sh@364 -- # decimal 1 00:26:47.687 13:40:52 -- scripts/common.sh@352 -- # local d=1 00:26:47.687 13:40:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:47.687 13:40:52 -- scripts/common.sh@354 -- # echo 1 00:26:47.687 13:40:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:47.687 13:40:52 -- scripts/common.sh@365 -- # decimal 2 00:26:47.687 13:40:52 -- scripts/common.sh@352 -- # local d=2 00:26:47.687 13:40:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:47.687 13:40:52 -- scripts/common.sh@354 -- # echo 2 00:26:47.687 13:40:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:47.687 13:40:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:47.687 13:40:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:47.687 13:40:52 -- scripts/common.sh@367 -- # return 0 00:26:47.687 13:40:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:47.687 13:40:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:47.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.687 --rc genhtml_branch_coverage=1 00:26:47.687 --rc genhtml_function_coverage=1 00:26:47.687 --rc genhtml_legend=1 00:26:47.687 --rc geninfo_all_blocks=1 00:26:47.687 --rc geninfo_unexecuted_blocks=1 00:26:47.687 00:26:47.687 ' 00:26:47.687 13:40:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:47.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.687 --rc genhtml_branch_coverage=1 00:26:47.687 --rc genhtml_function_coverage=1 00:26:47.687 --rc genhtml_legend=1 00:26:47.687 --rc geninfo_all_blocks=1 00:26:47.687 --rc geninfo_unexecuted_blocks=1 00:26:47.687 00:26:47.687 ' 00:26:47.687 13:40:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:47.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.687 --rc genhtml_branch_coverage=1 00:26:47.687 --rc genhtml_function_coverage=1 00:26:47.687 --rc genhtml_legend=1 00:26:47.687 --rc geninfo_all_blocks=1 00:26:47.687 --rc geninfo_unexecuted_blocks=1 00:26:47.687 00:26:47.687 ' 00:26:47.687 
13:40:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:47.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:47.687 --rc genhtml_branch_coverage=1 00:26:47.687 --rc genhtml_function_coverage=1 00:26:47.687 --rc genhtml_legend=1 00:26:47.687 --rc geninfo_all_blocks=1 00:26:47.687 --rc geninfo_unexecuted_blocks=1 00:26:47.687 00:26:47.687 ' 00:26:47.687 13:40:52 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:47.687 13:40:52 -- nvmf/common.sh@7 -- # uname -s 00:26:47.687 13:40:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.687 13:40:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.687 13:40:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.687 13:40:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.687 13:40:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.687 13:40:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.687 13:40:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.687 13:40:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.687 13:40:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.687 13:40:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.687 13:40:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 00:26:47.687 13:40:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=245f2070-11fd-4cc8-92e9-20ee097dca35 00:26:47.687 13:40:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.687 13:40:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.687 13:40:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:47.687 13:40:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.687 13:40:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.687 13:40:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.687 13:40:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.688 13:40:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.688 13:40:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.688 13:40:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.688 13:40:52 -- paths/export.sh@5 -- # export PATH 00:26:47.688 13:40:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.688 13:40:52 -- nvmf/common.sh@46 -- # : 0 00:26:47.688 13:40:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:47.688 13:40:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:47.688 13:40:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:47.688 13:40:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.688 13:40:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.688 13:40:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:47.688 13:40:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:47.688 13:40:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:47.688 13:40:52 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:47.688 13:40:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:47.688 13:40:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.688 13:40:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:47.688 13:40:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:47.688 13:40:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:47.688 13:40:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.688 13:40:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:47.688 13:40:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.688 13:40:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:47.688 13:40:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:47.688 13:40:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:47.688 13:40:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:47.688 13:40:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:47.688 13:40:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:47.688 13:40:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.688 13:40:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.688 13:40:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:47.688 13:40:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:47.688 13:40:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:47.688 13:40:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:47.688 13:40:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:47.688 13:40:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.688 13:40:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:47.688 13:40:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:47.688 13:40:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:47.688 13:40:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:47.688 13:40:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:47.688 13:40:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:47.688 Cannot find device "nvmf_tgt_br" 00:26:47.688 13:40:52 -- nvmf/common.sh@154 -- # true 00:26:47.688 13:40:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:47.688 Cannot find device "nvmf_tgt_br2" 00:26:47.688 13:40:52 -- nvmf/common.sh@155 -- # true 
00:26:47.688 13:40:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:47.688 13:40:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:47.688 Cannot find device "nvmf_tgt_br" 00:26:47.688 13:40:52 -- nvmf/common.sh@157 -- # true 00:26:47.688 13:40:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:47.688 Cannot find device "nvmf_tgt_br2" 00:26:47.688 13:40:52 -- nvmf/common.sh@158 -- # true 00:26:47.688 13:40:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:47.688 13:40:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:47.688 13:40:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:47.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.688 13:40:52 -- nvmf/common.sh@161 -- # true 00:26:47.688 13:40:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:47.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.688 13:40:52 -- nvmf/common.sh@162 -- # true 00:26:47.688 13:40:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:47.688 13:40:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:47.688 13:40:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:47.688 13:40:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:47.688 13:40:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:47.688 13:40:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:47.688 13:40:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:47.688 13:40:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:47.688 13:40:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:47.688 13:40:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:47.688 13:40:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:47.688 13:40:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:47.688 13:40:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:47.688 13:40:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:47.688 13:40:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:47.688 13:40:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:47.688 13:40:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:47.688 13:40:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:47.688 13:40:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:47.688 13:40:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:47.688 13:40:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:47.688 13:40:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:47.688 13:40:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:47.688 13:40:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:47.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:47.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:26:47.688 00:26:47.688 --- 10.0.0.2 ping statistics --- 00:26:47.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.688 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:47.688 13:40:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:47.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:47.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:26:47.688 00:26:47.688 --- 10.0.0.3 ping statistics --- 00:26:47.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.688 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:47.688 13:40:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:47.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:47.688 00:26:47.688 --- 10.0.0.1 ping statistics --- 00:26:47.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.688 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:47.688 13:40:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.688 13:40:53 -- nvmf/common.sh@421 -- # return 0 00:26:47.688 13:40:53 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:47.688 13:40:53 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:47.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:48.206 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:48.206 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:48.206 13:40:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.206 13:40:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:48.206 13:40:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:48.206 13:40:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.206 13:40:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:48.206 13:40:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:48.206 13:40:53 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:48.206 13:40:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:48.206 13:40:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.206 13:40:53 -- common/autotest_common.sh@10 -- # set +x 00:26:48.206 13:40:53 -- nvmf/common.sh@469 -- # nvmfpid=103457 00:26:48.206 13:40:53 -- nvmf/common.sh@470 -- # waitforlisten 103457 00:26:48.206 13:40:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:48.206 13:40:53 -- common/autotest_common.sh@829 -- # '[' -z 103457 ']' 00:26:48.206 13:40:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.206 13:40:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.206 13:40:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.206 13:40:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.206 13:40:53 -- common/autotest_common.sh@10 -- # set +x 00:26:48.465 [2024-12-15 13:40:53.903143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
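At this point nvmf_veth_init has finished building the virtual test network and verified it with the three pings above: the target runs in its own network namespace, each side gets a veth pair, the host-side peers are enslaved to a bridge, and an iptables rule admits TCP/4420 from the initiator interface. A condensed replay of the ip/iptables commands from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1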
00:26:48.465 [2024-12-15 13:40:53.903395] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.465 [2024-12-15 13:40:54.046768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.465 [2024-12-15 13:40:54.115524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:48.465 [2024-12-15 13:40:54.116047] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.465 [2024-12-15 13:40:54.116227] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.465 [2024-12-15 13:40:54.116454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.465 [2024-12-15 13:40:54.116777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.465 [2024-12-15 13:40:54.116845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.465 [2024-12-15 13:40:54.116909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.465 [2024-12-15 13:40:54.116906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.400 13:40:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:49.400 13:40:54 -- common/autotest_common.sh@862 -- # return 0 00:26:49.400 13:40:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:49.400 13:40:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:49.400 13:40:54 -- common/autotest_common.sh@10 -- # set +x 00:26:49.400 13:40:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.400 13:40:54 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:49.400 13:40:54 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:49.400 13:40:54 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:49.400 13:40:54 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:49.400 13:40:54 -- scripts/common.sh@312 -- # local nvmes 00:26:49.400 13:40:54 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:49.400 13:40:54 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:49.400 13:40:54 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:49.400 13:40:54 -- scripts/common.sh@297 -- # local bdf= 00:26:49.400 13:40:54 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:49.400 13:40:54 -- scripts/common.sh@232 -- # local class 00:26:49.400 13:40:54 -- scripts/common.sh@233 -- # local subclass 00:26:49.400 13:40:54 -- scripts/common.sh@234 -- # local progif 00:26:49.400 13:40:54 -- scripts/common.sh@235 -- # printf %02x 1 00:26:49.400 13:40:54 -- scripts/common.sh@235 -- # class=01 00:26:49.400 13:40:54 -- scripts/common.sh@236 -- # printf %02x 8 00:26:49.400 13:40:54 -- scripts/common.sh@236 -- # subclass=08 00:26:49.400 13:40:54 -- scripts/common.sh@237 -- # printf %02x 2 00:26:49.400 13:40:54 -- scripts/common.sh@237 -- # progif=02 00:26:49.400 13:40:54 -- scripts/common.sh@239 -- # hash lspci 00:26:49.400 13:40:54 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:49.400 13:40:54 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:49.400 13:40:54 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:49.400 13:40:54 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:49.400 13:40:54 -- scripts/common.sh@244 -- # tr -d '"' 00:26:49.400 13:40:54 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:49.400 13:40:54 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:49.400 13:40:54 -- scripts/common.sh@15 -- # local i 00:26:49.400 13:40:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:49.400 13:40:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:49.400 13:40:54 -- scripts/common.sh@24 -- # return 0 00:26:49.400 13:40:54 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:49.400 13:40:54 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:49.400 13:40:54 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:49.400 13:40:54 -- scripts/common.sh@15 -- # local i 00:26:49.400 13:40:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:49.400 13:40:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:49.400 13:40:54 -- scripts/common.sh@24 -- # return 0 00:26:49.400 13:40:54 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:49.400 13:40:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:49.400 13:40:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:49.400 13:40:55 -- scripts/common.sh@322 -- # uname -s 00:26:49.400 13:40:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:49.400 13:40:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:49.400 13:40:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:49.400 13:40:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:49.400 13:40:55 -- scripts/common.sh@322 -- # uname -s 00:26:49.400 13:40:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:49.400 13:40:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:49.400 13:40:55 -- scripts/common.sh@327 -- # (( 2 )) 00:26:49.400 13:40:55 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:49.400 13:40:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:49.400 13:40:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.400 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.400 ************************************ 00:26:49.400 START TEST spdk_target_abort 00:26:49.400 ************************************ 00:26:49.400 13:40:55 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:49.400 13:40:55 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:49.400 13:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.400 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.659 spdk_targetn1 00:26:49.659 13:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:49.659 13:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.659 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.659 [2024-12-15 
13:40:55.109000] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.659 13:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:49.659 13:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.659 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.659 13:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:49.659 13:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.659 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.659 13:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:49.659 13:40:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.659 13:40:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.659 [2024-12-15 13:40:55.141152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.659 13:40:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.659 13:40:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:52.944 Initializing NVMe Controllers 00:26:52.944 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:52.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:52.944 Initialization complete. Launching workers. 00:26:52.944 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10506, failed: 0 00:26:52.944 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1028, failed to submit 9478 00:26:52.944 success 751, unsuccess 277, failed 0 00:26:52.944 13:40:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:52.944 13:40:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:56.232 Initializing NVMe Controllers 00:26:56.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:56.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:56.232 Initialization complete. Launching workers. 00:26:56.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5951, failed: 0 00:26:56.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1232, failed to submit 4719 00:26:56.232 success 278, unsuccess 954, failed 0 00:26:56.232 13:41:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:56.232 13:41:01 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:59.526 Initializing NVMe Controllers 00:26:59.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:59.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:59.526 Initialization complete. Launching workers. 
00:26:59.526 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31449, failed: 0 00:26:59.526 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2742, failed to submit 28707 00:26:59.526 success 393, unsuccess 2349, failed 0 00:26:59.526 13:41:04 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:59.526 13:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.526 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:26:59.526 13:41:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.526 13:41:04 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:59.526 13:41:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.526 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:26:59.785 13:41:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.785 13:41:05 -- target/abort_qd_sizes.sh@62 -- # killprocess 103457 00:26:59.785 13:41:05 -- common/autotest_common.sh@936 -- # '[' -z 103457 ']' 00:26:59.785 13:41:05 -- common/autotest_common.sh@940 -- # kill -0 103457 00:26:59.785 13:41:05 -- common/autotest_common.sh@941 -- # uname 00:26:59.785 13:41:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.785 13:41:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103457 00:26:59.785 13:41:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:59.785 13:41:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:59.785 killing process with pid 103457 00:26:59.785 13:41:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103457' 00:26:59.785 13:41:05 -- common/autotest_common.sh@955 -- # kill 103457 00:26:59.785 13:41:05 -- common/autotest_common.sh@960 -- # wait 103457 00:27:00.042 00:27:00.042 real 0m10.613s 00:27:00.042 user 0m43.802s 00:27:00.042 sys 0m1.590s 00:27:00.042 13:41:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.042 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:27:00.042 ************************************ 00:27:00.042 END TEST spdk_target_abort 00:27:00.042 ************************************ 00:27:00.042 13:41:05 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:00.042 13:41:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.042 13:41:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.042 13:41:05 -- common/autotest_common.sh@10 -- # set +x 00:27:00.042 ************************************ 00:27:00.042 START TEST kernel_target_abort 00:27:00.042 ************************************ 00:27:00.042 13:41:05 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:00.042 13:41:05 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:00.042 13:41:05 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:00.042 13:41:05 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:00.042 13:41:05 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:00.042 13:41:05 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:00.042 13:41:05 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:00.042 13:41:05 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:00.042 13:41:05 -- nvmf/common.sh@627 -- # local block nvme 00:27:00.042 13:41:05 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:00.043 13:41:05 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:00.043 13:41:05 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:00.043 13:41:05 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:00.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.675 Waiting for block devices as requested 00:27:00.675 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:00.675 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:00.675 13:41:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:00.675 13:41:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:00.675 13:41:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:00.675 13:41:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:00.675 13:41:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.675 No valid GPT data, bailing 00:27:00.675 13:41:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.675 13:41:06 -- scripts/common.sh@393 -- # pt= 00:27:00.675 13:41:06 -- scripts/common.sh@394 -- # return 1 00:27:00.675 13:41:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:00.675 13:41:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:00.675 13:41:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:00.675 13:41:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:00.675 13:41:06 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:00.675 13:41:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:00.934 No valid GPT data, bailing 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # pt= 00:27:00.934 13:41:06 -- scripts/common.sh@394 -- # return 1 00:27:00.934 13:41:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:00.934 13:41:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:00.934 13:41:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:00.934 13:41:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:00.934 13:41:06 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:00.934 13:41:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:00.934 No valid GPT data, bailing 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # pt= 00:27:00.934 13:41:06 -- scripts/common.sh@394 -- # return 1 00:27:00.934 13:41:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:00.934 13:41:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:00.934 13:41:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:00.934 13:41:06 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:00.934 13:41:06 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:00.934 13:41:06 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:00.934 No valid GPT data, bailing 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:00.934 13:41:06 -- scripts/common.sh@393 -- # pt= 00:27:00.934 13:41:06 -- scripts/common.sh@394 -- # return 1 00:27:00.934 13:41:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:00.934 13:41:06 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:00.934 13:41:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:00.934 13:41:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:00.934 13:41:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.934 13:41:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:00.934 13:41:06 -- nvmf/common.sh@654 -- # echo 1 00:27:00.934 13:41:06 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:00.934 13:41:06 -- nvmf/common.sh@656 -- # echo 1 00:27:00.934 13:41:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:00.934 13:41:06 -- nvmf/common.sh@663 -- # echo tcp 00:27:00.934 13:41:06 -- nvmf/common.sh@664 -- # echo 4420 00:27:00.934 13:41:06 -- nvmf/common.sh@665 -- # echo ipv4 00:27:00.934 13:41:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.934 13:41:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:245f2070-11fd-4cc8-92e9-20ee097dca35 --hostid=245f2070-11fd-4cc8-92e9-20ee097dca35 -a 10.0.0.1 -t tcp -s 4420 00:27:00.934 00:27:00.934 Discovery Log Number of Records 2, Generation counter 2 00:27:00.934 =====Discovery Log Entry 0====== 00:27:00.934 trtype: tcp 00:27:00.934 adrfam: ipv4 00:27:00.934 subtype: current discovery subsystem 00:27:00.934 treq: not specified, sq flow control disable supported 00:27:00.934 portid: 1 00:27:00.934 trsvcid: 4420 00:27:00.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.934 traddr: 10.0.0.1 00:27:00.934 eflags: none 00:27:00.934 sectype: none 00:27:00.934 =====Discovery Log Entry 1====== 00:27:00.934 trtype: tcp 00:27:00.934 adrfam: ipv4 00:27:00.934 subtype: nvme subsystem 00:27:00.934 treq: not specified, sq flow control disable supported 00:27:00.934 portid: 1 00:27:00.934 trsvcid: 4420 00:27:00.934 subnqn: kernel_target 00:27:00.934 traddr: 10.0.0.1 00:27:00.934 eflags: none 00:27:00.934 sectype: none 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
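For the kernel_target_abort half of the test, a kernel-space NVMe-oF/TCP target is built entirely through the nvmet configfs tree, as the mkdir/echo/ln trace above shows. The trace does not record which attribute file each echo lands in, so the file names in the sketch below (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs attributes and should be read as assumptions; the subsystem name, backing namespace device, listen address and port are taken from the log.

# Sketch: hand-built kernel nvmet TCP target equivalent to the configure_kernel_target trace above.
# Attribute file names are assumed (standard nvmet configfs layout), not read from the log.
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/kernel_target
NS=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1

modprobe nvmet
modprobe nvmet_tcp
mkdir "$SUBSYS"
mkdir "$NS"
mkdir "$PORT"

echo SPDK-kernel_target > "$SUBSYS/attr_serial"   # serial string seen in the trace (destination assumed)
echo 1 > "$SUBSYS/attr_allow_any_host"            # accept any initiator host NQN
echo /dev/nvme1n3 > "$NS/device_path"             # backing block device selected by the script above
echo 1 > "$NS/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"

ln -s "$SUBSYS" "$PORT/subsystems/"               # publish the subsystem on the port

nvme discover -t tcp -a 10.0.0.1 -s 4420          # should report kernel_target, as in the discovery log above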
00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:00.934 13:41:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:04.223 Initializing NVMe Controllers 00:27:04.223 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:04.223 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:04.223 Initialization complete. Launching workers. 00:27:04.223 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30738, failed: 0 00:27:04.223 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30738, failed to submit 0 00:27:04.223 success 0, unsuccess 30738, failed 0 00:27:04.223 13:41:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.223 13:41:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:07.516 Initializing NVMe Controllers 00:27:07.516 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:07.516 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:07.516 Initialization complete. Launching workers. 00:27:07.516 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67137, failed: 0 00:27:07.516 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28141, failed to submit 38996 00:27:07.516 success 0, unsuccess 28141, failed 0 00:27:07.516 13:41:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:07.517 13:41:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:10.807 Initializing NVMe Controllers 00:27:10.807 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:10.807 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:10.807 Initialization complete. Launching workers. 
00:27:10.807 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75978, failed: 0 00:27:10.807 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18978, failed to submit 57000 00:27:10.807 success 0, unsuccess 18978, failed 0 00:27:10.807 13:41:16 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:10.807 13:41:16 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:10.807 13:41:16 -- nvmf/common.sh@677 -- # echo 0 00:27:10.807 13:41:16 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:10.807 13:41:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:10.807 13:41:16 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.808 13:41:16 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:10.808 13:41:16 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.808 13:41:16 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:10.808 00:27:10.808 real 0m10.495s 00:27:10.808 user 0m5.339s 00:27:10.808 sys 0m2.436s 00:27:10.808 13:41:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:10.808 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:27:10.808 ************************************ 00:27:10.808 END TEST kernel_target_abort 00:27:10.808 ************************************ 00:27:10.808 13:41:16 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:10.808 13:41:16 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:10.808 13:41:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:10.808 13:41:16 -- nvmf/common.sh@116 -- # sync 00:27:10.808 13:41:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:10.808 13:41:16 -- nvmf/common.sh@119 -- # set +e 00:27:10.808 13:41:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:10.808 13:41:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:10.808 rmmod nvme_tcp 00:27:10.808 rmmod nvme_fabrics 00:27:10.808 rmmod nvme_keyring 00:27:10.808 13:41:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:10.808 13:41:16 -- nvmf/common.sh@123 -- # set -e 00:27:10.808 13:41:16 -- nvmf/common.sh@124 -- # return 0 00:27:10.808 13:41:16 -- nvmf/common.sh@477 -- # '[' -n 103457 ']' 00:27:10.808 13:41:16 -- nvmf/common.sh@478 -- # killprocess 103457 00:27:10.808 13:41:16 -- common/autotest_common.sh@936 -- # '[' -z 103457 ']' 00:27:10.808 13:41:16 -- common/autotest_common.sh@940 -- # kill -0 103457 00:27:10.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103457) - No such process 00:27:10.808 Process with pid 103457 is not found 00:27:10.808 13:41:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103457 is not found' 00:27:10.808 13:41:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:10.808 13:41:16 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:11.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.432 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:11.432 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:11.432 13:41:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:11.432 13:41:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:11.432 13:41:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.432 13:41:17 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:11.432 13:41:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.432 13:41:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.432 13:41:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.432 13:41:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:11.432 00:27:11.432 real 0m24.592s 00:27:11.432 user 0m50.569s 00:27:11.432 sys 0m5.313s 00:27:11.432 13:41:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.432 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:11.432 ************************************ 00:27:11.432 END TEST nvmf_abort_qd_sizes 00:27:11.432 ************************************ 00:27:11.432 13:41:17 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:11.432 13:41:17 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:11.432 13:41:17 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:11.432 13:41:17 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:11.432 13:41:17 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:11.432 13:41:17 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:11.432 13:41:17 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:11.432 13:41:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.432 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:11.691 13:41:17 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:11.691 13:41:17 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:11.691 13:41:17 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:11.691 13:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:13.066 INFO: APP EXITING 00:27:13.066 INFO: killing all VMs 00:27:13.066 INFO: killing vhost app 00:27:13.066 INFO: EXIT DONE 00:27:13.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.892 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:13.892 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:14.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:14.460 Cleaning 00:27:14.460 Removing: /var/run/dpdk/spdk0/config 00:27:14.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:14.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:14.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:14.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:14.460 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:14.460 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:14.460 Removing: /var/run/dpdk/spdk1/config 00:27:14.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:14.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:14.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:14.460 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:14.719 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:14.719 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:14.719 Removing: /var/run/dpdk/spdk2/config 00:27:14.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:14.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:14.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:14.719 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:14.719 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:14.719 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:14.719 Removing: /var/run/dpdk/spdk3/config 00:27:14.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:14.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:14.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:14.719 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:14.719 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:14.719 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:14.719 Removing: /var/run/dpdk/spdk4/config 00:27:14.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:14.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:14.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:14.719 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:14.719 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:14.719 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:14.719 Removing: /dev/shm/nvmf_trace.0 00:27:14.719 Removing: /dev/shm/spdk_tgt_trace.pid67555 00:27:14.719 Removing: /var/run/dpdk/spdk0 00:27:14.719 Removing: /var/run/dpdk/spdk1 00:27:14.719 Removing: /var/run/dpdk/spdk2 00:27:14.719 Removing: /var/run/dpdk/spdk3 00:27:14.719 Removing: /var/run/dpdk/spdk4 00:27:14.719 Removing: /var/run/dpdk/spdk_pid100429 00:27:14.719 Removing: /var/run/dpdk/spdk_pid100630 00:27:14.719 Removing: /var/run/dpdk/spdk_pid100925 00:27:14.719 Removing: /var/run/dpdk/spdk_pid101234 00:27:14.719 Removing: /var/run/dpdk/spdk_pid101791 00:27:14.719 Removing: /var/run/dpdk/spdk_pid101796 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102169 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102329 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102487 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102583 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102739 00:27:14.719 Removing: /var/run/dpdk/spdk_pid102848 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103527 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103563 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103598 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103850 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103880 00:27:14.719 Removing: /var/run/dpdk/spdk_pid103915 00:27:14.719 Removing: /var/run/dpdk/spdk_pid67403 00:27:14.719 Removing: /var/run/dpdk/spdk_pid67555 00:27:14.719 Removing: /var/run/dpdk/spdk_pid67876 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68151 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68323 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68412 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68511 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68614 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68647 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68677 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68751 00:27:14.719 Removing: /var/run/dpdk/spdk_pid68863 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69500 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69559 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69628 00:27:14.719 Removing: 
/var/run/dpdk/spdk_pid69656 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69735 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69763 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69836 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69864 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69916 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69946 00:27:14.719 Removing: /var/run/dpdk/spdk_pid69992 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70022 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70181 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70213 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70290 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70365 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70384 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70448 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70462 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70502 00:27:14.719 Removing: /var/run/dpdk/spdk_pid70516 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70551 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70570 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70599 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70619 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70653 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70669 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70709 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70723 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70758 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70777 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70806 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70826 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70860 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70880 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70909 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70928 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70963 00:27:14.978 Removing: /var/run/dpdk/spdk_pid70977 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71017 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71031 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71060 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71085 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71114 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71133 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71168 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71182 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71222 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71236 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71271 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71293 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71325 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71348 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71385 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71405 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71438 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71453 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71489 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71566 00:27:14.978 Removing: /var/run/dpdk/spdk_pid71667 00:27:14.978 Removing: /var/run/dpdk/spdk_pid72099 00:27:14.978 Removing: /var/run/dpdk/spdk_pid79067 00:27:14.978 Removing: /var/run/dpdk/spdk_pid79410 00:27:14.978 Removing: /var/run/dpdk/spdk_pid81827 00:27:14.978 Removing: /var/run/dpdk/spdk_pid82217 00:27:14.978 Removing: /var/run/dpdk/spdk_pid82478 00:27:14.978 Removing: /var/run/dpdk/spdk_pid82523 00:27:14.978 Removing: /var/run/dpdk/spdk_pid82839 00:27:14.978 Removing: /var/run/dpdk/spdk_pid82889 00:27:14.978 Removing: /var/run/dpdk/spdk_pid83262 00:27:14.978 Removing: /var/run/dpdk/spdk_pid83797 00:27:14.978 Removing: /var/run/dpdk/spdk_pid84227 00:27:14.978 Removing: /var/run/dpdk/spdk_pid85203 
00:27:14.978 Removing: /var/run/dpdk/spdk_pid86201 00:27:14.978 Removing: /var/run/dpdk/spdk_pid86320 00:27:14.978 Removing: /var/run/dpdk/spdk_pid86382 00:27:14.978 Removing: /var/run/dpdk/spdk_pid87863 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88103 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88556 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88662 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88821 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88862 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88908 00:27:14.978 Removing: /var/run/dpdk/spdk_pid88953 00:27:14.978 Removing: /var/run/dpdk/spdk_pid89116 00:27:14.978 Removing: /var/run/dpdk/spdk_pid89268 00:27:14.978 Removing: /var/run/dpdk/spdk_pid89530 00:27:14.978 Removing: /var/run/dpdk/spdk_pid89647 00:27:14.978 Removing: /var/run/dpdk/spdk_pid90066 00:27:14.978 Removing: /var/run/dpdk/spdk_pid90446 00:27:14.978 Removing: /var/run/dpdk/spdk_pid90448 00:27:14.978 Removing: /var/run/dpdk/spdk_pid92716 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93020 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93542 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93544 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93896 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93910 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93924 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93955 00:27:14.978 Removing: /var/run/dpdk/spdk_pid93966 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94112 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94114 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94222 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94224 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94332 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94340 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94824 00:27:14.978 Removing: /var/run/dpdk/spdk_pid94873 00:27:15.238 Removing: /var/run/dpdk/spdk_pid95024 00:27:15.238 Removing: /var/run/dpdk/spdk_pid95145 00:27:15.238 Removing: /var/run/dpdk/spdk_pid95551 00:27:15.238 Removing: /var/run/dpdk/spdk_pid95799 00:27:15.238 Removing: /var/run/dpdk/spdk_pid96304 00:27:15.238 Removing: /var/run/dpdk/spdk_pid96864 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97340 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97411 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97488 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97582 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97736 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97825 00:27:15.238 Removing: /var/run/dpdk/spdk_pid97917 00:27:15.238 Removing: /var/run/dpdk/spdk_pid98007 00:27:15.238 Removing: /var/run/dpdk/spdk_pid98356 00:27:15.238 Removing: /var/run/dpdk/spdk_pid99066 00:27:15.238 Clean 00:27:15.238 killing process with pid 61816 00:27:15.238 killing process with pid 61818 00:27:15.238 13:41:20 -- common/autotest_common.sh@1446 -- # return 0 00:27:15.238 13:41:20 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:15.238 13:41:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:15.238 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:15.238 13:41:20 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:15.238 13:41:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:15.238 13:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:15.238 13:41:20 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:15.497 13:41:20 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:15.497 13:41:20 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:15.497 13:41:20 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:15.497 13:41:20 -- spdk/autotest.sh@383 -- # hostname 00:27:15.497 13:41:20 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:15.497 geninfo: WARNING: invalid characters removed from testname! 00:27:37.455 13:41:41 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.831 13:41:44 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:41.361 13:41:46 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.903 13:41:49 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:45.808 13:41:51 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.341 13:41:53 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:50.246 13:41:55 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:50.246 13:41:55 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:50.246 13:41:55 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:50.246 13:41:55 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:50.246 13:41:55 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:50.246 13:41:55 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:50.246 13:41:55 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
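After the functional tests finish, line coverage for the whole run is assembled with lcov, as the commands above show: a capture pass over the build tree, a merge of the base and test captures, and a series of -r passes that strip DPDK, system headers, and a few SPDK example/app directories out of the final report. A condensed sketch of that pipeline follows; the long --rc option lists and absolute output paths from the trace are abbreviated, and OUT is a placeholder for the job's output directory.

# Sketch: the coverage post-processing traced above, condensed (paths and option lists abbreviated).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
OUT=$SPDK_DIR/../output                            # assumed to match the job's output directory
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

$LCOV -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"             # capture test-run counters
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"             # merge baseline + test
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"                          # drop DPDK sources
$LCOV -r "$OUT/cov_total.info" --ignore-errors unused '/usr/*' -o "$OUT/cov_total.info"     # drop system headers
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"                                             # keep only the merged report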
00:27:50.246 13:41:55 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:50.246 13:41:55 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:50.246 13:41:55 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:50.246 13:41:55 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:50.246 13:41:55 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:50.246 13:41:55 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:50.246 13:41:55 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:50.246 13:41:55 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:50.246 13:41:55 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:50.246 13:41:55 -- scripts/common.sh@343 -- $ case "$op" in 00:27:50.246 13:41:55 -- scripts/common.sh@344 -- $ : 1 00:27:50.246 13:41:55 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:50.246 13:41:55 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.246 13:41:55 -- scripts/common.sh@364 -- $ decimal 1 00:27:50.246 13:41:55 -- scripts/common.sh@352 -- $ local d=1 00:27:50.246 13:41:55 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:50.246 13:41:55 -- scripts/common.sh@354 -- $ echo 1 00:27:50.246 13:41:55 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:50.246 13:41:55 -- scripts/common.sh@365 -- $ decimal 2 00:27:50.246 13:41:55 -- scripts/common.sh@352 -- $ local d=2 00:27:50.246 13:41:55 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:50.246 13:41:55 -- scripts/common.sh@354 -- $ echo 2 00:27:50.505 13:41:55 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:50.505 13:41:55 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:50.505 13:41:55 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:50.505 13:41:55 -- scripts/common.sh@367 -- $ return 0 00:27:50.505 13:41:55 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.505 13:41:55 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.505 --rc genhtml_branch_coverage=1 00:27:50.505 --rc genhtml_function_coverage=1 00:27:50.505 --rc genhtml_legend=1 00:27:50.505 --rc geninfo_all_blocks=1 00:27:50.505 --rc geninfo_unexecuted_blocks=1 00:27:50.505 00:27:50.505 ' 00:27:50.505 13:41:55 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.505 --rc genhtml_branch_coverage=1 00:27:50.505 --rc genhtml_function_coverage=1 00:27:50.505 --rc genhtml_legend=1 00:27:50.505 --rc geninfo_all_blocks=1 00:27:50.505 --rc geninfo_unexecuted_blocks=1 00:27:50.505 00:27:50.505 ' 00:27:50.505 13:41:55 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.505 --rc genhtml_branch_coverage=1 00:27:50.505 --rc genhtml_function_coverage=1 00:27:50.505 --rc genhtml_legend=1 00:27:50.505 --rc geninfo_all_blocks=1 00:27:50.505 --rc geninfo_unexecuted_blocks=1 00:27:50.506 00:27:50.506 ' 00:27:50.506 13:41:55 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:50.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.506 --rc genhtml_branch_coverage=1 00:27:50.506 --rc genhtml_function_coverage=1 00:27:50.506 --rc genhtml_legend=1 00:27:50.506 --rc geninfo_all_blocks=1 00:27:50.506 --rc geninfo_unexecuted_blocks=1 00:27:50.506 00:27:50.506 ' 00:27:50.506 13:41:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.506 13:41:55 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:50.506 13:41:55 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.506 13:41:55 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.506 13:41:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 13:41:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 13:41:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 13:41:55 -- paths/export.sh@5 -- $ export PATH 00:27:50.506 13:41:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.506 13:41:55 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:50.506 13:41:55 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:50.506 13:41:55 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734270115.XXXXXX 00:27:50.506 13:41:55 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734270115.gVhh2G 00:27:50.506 13:41:55 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:50.506 13:41:55 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:50.506 13:41:55 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:50.506 13:41:55 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:50.506 13:41:55 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:50.506 13:41:55 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:50.506 13:41:55 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:50.506 13:41:55 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:50.506 13:41:55 -- common/autotest_common.sh@10 -- $ set +x 00:27:50.506 13:41:55 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:50.506 13:41:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:50.506 13:41:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:50.506 13:41:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:50.506 13:41:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:50.506 13:41:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:50.506 13:41:55 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:50.506 13:41:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:50.506 13:41:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:50.506 13:41:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:50.506 13:41:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:50.506 + [[ -n 5967 ]] 00:27:50.506 + sudo kill 5967 00:27:50.516 [Pipeline] } 00:27:50.533 [Pipeline] // timeout 00:27:50.538 [Pipeline] } 00:27:50.553 [Pipeline] // stage 00:27:50.558 [Pipeline] } 00:27:50.587 [Pipeline] // catchError 00:27:50.598 [Pipeline] stage 00:27:50.603 [Pipeline] { (Stop VM) 00:27:50.616 [Pipeline] sh 00:27:50.907 + vagrant halt 00:27:54.194 ==> default: Halting domain... 00:28:00.874 [Pipeline] sh 00:28:01.153 + vagrant destroy -f 00:28:03.686 ==> default: Removing domain... 00:28:03.956 [Pipeline] sh 00:28:04.237 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:04.245 [Pipeline] } 00:28:04.260 [Pipeline] // stage 00:28:04.266 [Pipeline] } 00:28:04.280 [Pipeline] // dir 00:28:04.285 [Pipeline] } 00:28:04.299 [Pipeline] // wrap 00:28:04.305 [Pipeline] } 00:28:04.318 [Pipeline] // catchError 00:28:04.328 [Pipeline] stage 00:28:04.330 [Pipeline] { (Epilogue) 00:28:04.343 [Pipeline] sh 00:28:04.625 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:09.905 [Pipeline] catchError 00:28:09.907 [Pipeline] { 00:28:09.917 [Pipeline] sh 00:28:10.197 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:10.456 Artifacts sizes are good 00:28:10.464 [Pipeline] } 00:28:10.478 [Pipeline] // catchError 00:28:10.488 [Pipeline] archiveArtifacts 00:28:10.494 Archiving artifacts 00:28:10.629 [Pipeline] cleanWs 00:28:10.642 [WS-CLEANUP] Deleting project workspace... 00:28:10.642 [WS-CLEANUP] Deferred wipeout is used... 00:28:10.672 [WS-CLEANUP] done 00:28:10.674 [Pipeline] } 00:28:10.690 [Pipeline] // stage 00:28:10.695 [Pipeline] } 00:28:10.708 [Pipeline] // node 00:28:10.713 [Pipeline] End of Pipeline 00:28:10.753 Finished: SUCCESS